Saturday, February 17. 2018
Environmental Devices retrospective exhibition by fabric | ch until today! | #fabric|ch #accrochage #book

Note: a few pictures from the fabric | ch retrospective at #EphemeralKunsthalleLausanne (the disused Mayer & Soutter factory in Renens, near Lausanne). The exhibition has been set up in the context of the production of a monographic book and is still open today (Saturday 17.02, 5.00-8.00 pm)!
By fabric | ch ----- All images: Ch. Guignard.
Posted by Patrick Keller
in fabric | ch, Architecture, Art, Interaction design
at
12:51
Defined tags for this entry: architects, architecture, art, artificial reality, climate, conditioning, data, design (environments), design (interactions), devices, digital, engineering, environment, experimentation, fabric | ch, interaction design, interface, interferences, monitoring, research, variable
Saturday, July 15. 2017
All 234 fabric | rblg updated tags! | #fabric|ch #Summer #farniente #reading
By fabric | ch -----
As this blog still lacks a decent search engine and we don't use a "tag cloud"... this post may help you navigate the updated content on | rblg (as of 07.2017) via all of its tags!
HERE ARE ALL THE CURRENT TAGS TO NAVIGATE ON | RBLG BLOG: (visible just below if you're browsing the blog's HTML pages, or here for RSS readers)
Posted by Patrick Keller
in fabric | ch
at
08:30
Defined tags for this entry: 3d, activism, advertising, agriculture, air, animation, applications, archeology, architects, architecture, art, art direction, artificial reality, artists, atmosphere, automation, behaviour, bioinspired, biotech, blog, body, books, brand, character, citizen, city, climate, clips, code, cognition, collaboration, commodification, communication, community, computing, conditioning, conferences, consumption, content, control, craft, culture & society, curators, customization, data, density, design, design (environments), design (fashion), design (graphic), design (interactions), design (motion), design (products), designers, development, devices, digital, digital fabrication, digital life, digital marketing, dimensions, direct, display, documentary, earth, ecal, ecology, economy, electronics, energy, engineering, environment, equipment, event, exhibitions, experience, experimentation, fabric | ch, farming, fashion, fiction, films, food, form, franchised, friends, function, future, gadgets, games, garden, generative, geography, globalization, goods, hack, hardware, harvesting, health, history, housing, hybrid, identification, illustration, images, information, infrastructure, installations, interaction design, interface, interferences, kinetic, knowledge, landscape, language, law, life, lighting, localization, localized, magazines, make, mapping, marketing, mashup, materials, media, mediated, mind, mining, mobile, mobility, molecules, monitoring, monography, movie, museum, music, nanotech, narrative, nature, networks, neurosciences, opensource, operating system, participative, particles, people, perception, photography, physics, physiological, politics, pollution, presence, print, privacy, product, profiling, projects, psychological, public, publishing, reactive, real time, recycling, research, resources, responsive, ressources, robotics, santé, scenography, schools, science & technology, scientists, screen, search, security, semantic, services, sharing, shopping, signage, smart, social, society, software, solar, sound, space, speculation, statement, surveillance, sustainability, tactile, tagging, tangible, targeted, teaching, technology, tele-, telecom, territory, text, textile, theory, thinkers, thinking, time, tools, topology, tourism, toys, transmission, trend, typography, ubiquitous, urbanism, users, variable, vernacular, video, viral, vision, visualization, voice, vr, war, weather, web, wireless, writing
Wednesday, October 19. 2016
Le médium spirite ou la magie d’un corps hypermédiatique à l’ère de la modernité (The spirit medium, or the magic of a hypermediatic body in the age of modernity) | #spirit #media #technology
Note: following the previous post that mentioned the idea of spiritism in relation to personal data, or forgotten personal data, but also in relation to "beliefs" linked to contemporary technologies, here comes an interesting symposium (Machines, magie, médias) and a related post on France Culture. The post and the linked talk by researcher Mireille Berton (from the nearby University of Lausanne, Department of Film History and Aesthetics) are in French; the excerpts below are given in English translation.
Via France Culture -----
Cerisy: Machines, magie, médias (August 20-28, 2016)

Magicians (from Robert-Houdin and Georges Méliès to Harry Houdini and Howard Thurston, followed by Abdul Alafrez, David Copperfield, Jim Steinmeyer, Marco Tempest and many others) have questioned the processes by which illusion is produced, keeping pace with innovations in optics, acoustics, electricity and, more recently, computing and digital technology. Yet any technology that plays with our senses remains, as long as it has not revealed all its secrets, as long as the techniques it conceals are not mastered, and as long as it has not been taken up and formalized by a medium, at a stage that can be described as a magical moment. Machines and magic indeed share secrecy, metamorphosis, the double, participation and mediation. This standpoint rests on the hypothesis advanced by Arthur C. Clarke: "Any sufficiently advanced technology is indistinguishable from magic" (1984, p. 36). The very emergence of media can be analysed as an embodiment of magical thinking, the "pattern-model" (Edgar Morin, 1956) of the earliest form of individual understanding (Marcel Mauss, 1950). De facto, from the phantasmagorias of the eighteenth century to today's digital arts, by way of theatre, the magic lantern, photography, the Théâtrophone, the phonograph, radio, television and cinema, the history of spectacular machinery intersects with that of magic and with the experiments of its practitioners, always on the lookout for any novelty that would allow magical effects to be renewed through the mechanization of performance. It is through the study of the techniques of illusion specific to each medium, whose recurring principles have been brought to light by intermedial studies and media archaeology, that this encounter with the art of magic imposed itself. The symposium proposes to analyse their technological cycle: the magical moment (belief and wonder), the magical mode (rhetoric), and secularization (the trivialization of the magical dimension). This cycle is analysed transversally in order to underline its intermedial dimensions. The papers are grouped into seven sections: The art of magic; Magic and aesthetics of astonishment; Magic, television and video; The marvels of science; Magic of the image, the image and magic; Magic of sound, sound and magic; From the tableau vivant to digital mimicry. The first brings historians and practitioners of magic into dialogue and presents the state of the archives on the subject. The six following sections set out the correlations: magic/media and media/magic.

Mireille Berton holds a doctorate in Letters and is a senior lecturer and researcher in the Department of Film History and Aesthetics at the University of Lausanne (UNIL). Her work focuses mainly on the relationships between cinema and the sciences of the psyche (psychology, psychoanalysis, psychiatry, parapsychology), with a particular interest in approaches combining cultural history, the epistemology of media and gender studies. Besides numerous studies, she has published a book based on her doctoral thesis, Le Corps nerveux des spectateurs. Cinéma et sciences du psychisme autour de 1900 (L’Âge d’Homme, 2015), and she co-edited with Anne-Katrin Weber a collective volume devoted to the history of televisual apparatuses, approached through discourses, practices, objects and representations (La Télévision du Téléphonoscope à YouTube. Pour une archéologie de l'audiovision, Antipodes, 2009). She is currently working on a manuscript devoted to representations of the spirit medium in contemporary films and television series (to be published by Georg in 2017).

Abstract of the talk: The presentation returns to a question often addressed in the history of science and of occultism, namely the role played by instruments of measurement and capture in the apprehension of paranormal phenomena. An analysis of spiritualist sources published during the first decades of the twentieth century brings to light the tensions provoked by the optical and electrical devices that came to challenge the all-powerful body of the spirit medium on its own territory. The encounter between occultism and modernity thus gave rise to the (discursive and fantasmatic) figure of the "hypermediatic" medium, one who surpasses all the possibilities offered by scientific discoveries.
Posted by Patrick Keller
in Culture & society, Science & technology
at
11:14
Defined tags for this entry: artificial reality, culture & society, display, history, illusion, interface, perception, science & technology, thinking
Thursday, January 28. 2016
I&IC at Unfrozen, Swiss Design Network 2016 Conference | #datacenter #infrastructures #research
Note: I'll travel this afternoon to the Grandhotel Giessbach (sounds like a Wes Anderson movie) to present later tonight the interim results of the research I'm jointly leading with Nicolas Nova for ECAL & HEAD - Genève, in partnership with the EPFL-ECAL Lab & EPFL: Inhabiting and Interfacing the Cloud(s). Looking forward to meeting the (mainly Swiss) design research community at the hotel...
Via iiclouds.org ----- Christophe Guignard and I will have the pleasure of presenting the interim results of the design research Inhabiting & Interfacing the Cloud(s) next Thursday (28.01.2016) at the Swiss Design Network conference. The conference takes place at the Grandhotel Giessbach above Lake Brienz, where we'll focus on a research process fully articulated around the practice of design (with the participation of students, in the case of I&IC) and around the process of the project. This will apparently happen between "dinner" and "bar", as we'll give a "Fireside Talk" at 9pm. Can't wait to do and see that...
The full program and proceedings (pdf) of the conference can be accessed HERE.
As with previous events, we'll try to post a short "follow up" on this documentary blog after the conference.
Posted by Patrick Keller
in fabric | ch, Architecture, Interaction design, Territory
at
10:46
Defined tags for this entry: architecture, data, fabric | ch, harvesting, interaction design, interface, interferences, networks, opensource, research, speculation, territory, ubiquitous
Wednesday, October 21. 2015
Create a Self-Destructing Website With This Open Source Code | #web #design
Note: suddenly speaking about web design, wouldn't it be time to start doing interaction design on the web again? Aren't we in need of some "net art" approach, some weirder propositions than the overly slick "responsive design" of a predictable "user-centered" or even "experience" design dogma? These kinds of complex web/interaction experiences have almost all vanished (remember Jodi?), to the point that there is now a vast experimental void for designers to tap into again! Well, after the site that can only be browsed by one person at a time (with a poor visual design, indeed), here comes one that destroys itself. Could be a start... Btw, thinking about files, sites, contents, etc. that would destroy themselves would probably help save lots of energy in data storage, hard drives and datacenters of all sorts, where this data sits like zombies.
Via GOOD ----- By Isis Madrid
Matt Rothenberg, former head of product at Flickr and Bitly, recently caused an internet hubbub with his Unindexed project. The communal website continuously searched for itself on Google for 22 days; upon finally finding itself, it spontaneously combusted.
In addition to chasing its own tail on Google, Unindexed provided a platform for visitors to leave comments and encourage one another to spread the word about the website. According to Rothenberg, knowledge of the website was primarily passed on in the physical world via word of mouth. “Part of the goal with the project was to create a sense of unease with the participants—if they liked it, they could and should share it with others, so that the conversation on the site could grow,” Rothenberg told Motherboard. “But by doing so they were potentially contributing to its demise via indexing, as the more the URL was out there, the faster Google would find it.” When the website finally found itself on Google, the platform disappeared and this message replaced it:
If you are interested in creating a similar self-destructing site, feel free to start with Rothenberg’s open source code.
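To get a feel for the mechanics before diving into the repository, here is a minimal sketch of the idea, not Rothenberg's actual code: a loop that periodically searches Google for the site's own URL and wipes the site's content once it finds itself. The URL, the content directory and the crude result-page check are illustrative assumptions.

```python
# Minimal sketch of a self-destructing site, assuming it runs on the web host
# itself. SITE_URL and SITE_ROOT are hypothetical placeholders; scraping a
# Google result page like this is fragile and only meant to illustrate the idea.
import time
import shutil
import requests

SITE_URL = "https://example-unindexed-site.net"   # hypothetical URL of the site
SITE_ROOT = "/var/www/unindexed"                  # hypothetical content directory
CHECK_INTERVAL = 60 * 60                          # look for ourselves once an hour

def is_indexed(url: str) -> bool:
    """Return True if a Google result page for the exact URL mentions it."""
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": f'"{url}"'},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    return url in resp.text

def self_destruct() -> None:
    """Delete the site's content; a real version might leave a farewell page."""
    shutil.rmtree(SITE_ROOT, ignore_errors=True)

if __name__ == "__main__":
    while not is_indexed(SITE_URL):
        time.sleep(CHECK_INTERVAL)
    self_destruct()
```

The interesting design choice is that publicity and destruction are the same mechanism: the more the URL circulates, the sooner the check above succeeds.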
Posted by Patrick Keller
in Culture & society, Interaction design, Sustainability
at
09:54
Defined tags for this entry: culture & society, experience, interaction design, interface, internet, networks, perception, sustainability, web
Sunday, December 14. 2014
I&IC workshop #3 at ECAL: output > Networked Data Objects & Devices | #data #things
Via iiclouds.org ----- The third workshop we ran in the frame of I&IC with our guest researcher Matthew Plummer-Fernandez (Goldsmiths University) and the 2nd & 3rd year students (Ba) in Media & Interaction Design (ECAL) ended last Friday (| rblg note: on the 21st of Nov.) with interesting results. The workshop focused on small situated computing technologies that can collect, aggregate and/or "manipulate" data in automated ways (bots) and that would certainly need to rely heavily on cloud technologies due to their low storage and computing capacities — so to speak, "networked data objects" that will soon become very common, thanks to cheap new small computing devices (e.g. Raspberry Pis for DIY applications) or sensors (e.g. Arduino boards, etc.). The title of the workshop was "Botcave", whose objective was explained by Matthew in a previous post. The choice of this context of work was defined according to our overall research objective, even though we knew that it wouldn't directly address the "cloud computing" apparatus — something we learned to be a difficult approach during the second workshop — but that it would nonetheless question its interfaces and the way we experience the whole service, especially the evolution of this apparatus through new types of everyday interactions and data generation.
Matthew Plummer-Fernandez (#Algopop) during the final presentation at the end of the research workshop.
Through this workshop, Matthew and the students raised the following points and questions:

1° Small situated technologies that will soon spread everywhere will become heavy users of cloud-based computing and data storage, as they have low storage and computing capacities of their own. While some might just use and manipulate existing data (like some of the workshop projects — e.g. #Good vs. #Evil or Moody Printer), altogether they will mainly contribute to producing extremely large additional quantities of data (e.g. Robinson Miner). Yet the amount of meaningful data to be "pushed" and "treated" in the cloud remains a big question mark; as there will be (too) huge amounts of such data — Lucien will probably post something later about this subject: "fog computing" — this might end up requiring interdisciplinary teams to rethink cloud architectures.

2° Stored data become "alive" or significant only when "manipulated". This can of course be done by "analog users", but in general it is now rather operated by rules and algorithms of different sorts (in the frame of this workshop: automated bots). Are these rules "situated" as well, and possibly context aware (context intelligent) — e.g. Robinson Miner? Or are they somehow more abstract and located anywhere in the cloud? Both?

3° These "Networked Data Objects" (and soon "Networked Data Everything") will contribute to "babelize" user interactions and interfaces in all directions, paving the way for new types of combinations and experiences (creolization processes) — e.g. The Beast, The Like Hotline, Simon Coins or The Wifi Cracker could be considered starting phases of such processes. Cloud interfaces and computing will then become everyday "things" and, at "home", new domestic objects with which we'll have totally different interactions (this last point must still be discussed though, as domesticity might not exist anymore according to Space Caviar).
Moody Printer – (Alexia Léchot, Benjamin Botros) Moody Printer remains a basic conceptual proposal at this stage: a hacked printer, connected to a hidden Raspberry Pi (it would be located inside the printer), has access to weather information. Similarly to human beings, its "mood" can be affected by such inputs following some basic rules (good – bad, hot – cold, sunny – cloudy – rainy, etc.). The automated process then searches Google Images according to its current "mood" (a direct mapping between "mood", weather conditions and an exhaustive list of words) and autonomously starts to print the results. A different kind of printer combined with weather monitoring.
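To make the pattern more concrete, here is a minimal sketch of the Moody Printer logic, not the students' actual code: it assumes a Raspberry Pi with a CUPS-configured printer (the `lp` command) and the free Open-Meteo weather API, and it leaves the image-search step as a stub, since the original project relied on Google Images.

```python
# Sketch only: weather -> "mood" -> keywords -> print. Coordinates, mood table
# and the weather-code thresholds are rough assumptions, not the project's rules.
import subprocess
import requests

LAT, LON = 46.52, 6.63   # Lausanne, roughly

MOODS = {
    "sunny": ["happy", "bright", "beach"],
    "cloudy": ["melancholy", "grey", "fog"],
    "rainy": ["gloomy", "puddle", "umbrella"],
}

def current_weather_mood() -> str:
    """Map the current Open-Meteo weather code to one of the printer's 'moods'."""
    r = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": LAT, "longitude": LON, "current_weather": "true"},
        timeout=10,
    )
    code = r.json()["current_weather"]["weathercode"]
    if code == 0:          # clear sky
        return "sunny"
    if code < 50:          # clouds, fog
        return "cloudy"
    return "rainy"         # drizzle, rain, snow, storms

def fetch_image_for(keyword: str) -> str:
    """Stub: download an image matching the keyword and return a local file path.
    A real version would call an image-search API here."""
    raise NotImplementedError

def print_mood() -> None:
    mood = current_weather_mood()
    for word in MOODS[mood]:
        path = fetch_image_for(word)
        subprocess.run(["lp", path], check=True)   # send to the default CUPS printer

if __name__ == "__main__":
    print_mood()
```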
The Beast – (Nicolas Nahornyj) Top: Nicolas Nahornyj presenting his project to the assembly. Bottom: the laptop and "the beast". The Beast is a device that asks to be fed with money at random times... It is your new laptop companion. To calm it down for a while, you must insert a coin in the slot provided for that purpose. If you don't comply, not only will it continue to ask for money on a more frequent basis, but it will also randomly pick an image that lies around on your hard drive, post it on a popular social network (e.g. Facebook, Pinterest, etc.) and then erase it from your local disk. Slowly, The Beast will remove all images from your hard drive and post them online... A different kind of slot machine combined with private file stealing.
Robinson – (Anne-Sophie Bazard, Jonas Lacôte, Pierre-Xavier Puissant) Top: Pierre-Xavier Puissant looking at the autonomous "minecrafting" of his bot. Bottom: the proposed bot container, which takes up the idea of cubic construction. It could be placed in your garden, in one of your rooms, then in your fridge, etc. Robinson automates the procedural construction of Minecraft environments. To do so, the bot uses local weather information monitored by a weather sensor located inside the cubic box, attached to a Raspberry Pi placed within the box as well. The sensor looks for changes in temperature, humidity, etc., which then change the building blocks and rules of construction inside Minecraft (put your cube inside your fridge and it will start to build icy blocks, put it in a wet environment and it will build with grass, etc.). A different kind of thermometer combined with a construction game. Note: Matthew Plummer-Fernandez also produced two (auto)Minecraft bots during the workshop week. The first one builds an environment according to fluctuations in different market indexes, while the second one tries to build "shapes" to escape this first environment. These two bots are downloadable from the Github repository that was created during the workshop.
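A rough sketch of how such a weather-driven building bot could be wired together is given below. It is not the workshop code: it assumes Minecraft Pi Edition with the mcpi Python API and a DHT22 temperature/humidity sensor read through the Adafruit_DHT library; the block thresholds are arbitrary.

```python
# Sketch only: read the environment, choose a block type, place it in Minecraft.
import time
import Adafruit_DHT
from mcpi import block
from mcpi.minecraft import Minecraft

SENSOR, SENSOR_PIN = Adafruit_DHT.DHT22, 4   # DHT22 data line on GPIO 4 (assumed wiring)
mc = Minecraft.create()                       # connects to the local Minecraft Pi Edition

def block_for(temperature: float, humidity: float):
    """Choose a building block from the measured environment (arbitrary thresholds)."""
    if temperature < 5:
        return block.ICE
    if humidity > 70:
        return block.GRASS
    return block.STONE

def build_step(x: int, y: int, z: int) -> None:
    humidity, temperature = Adafruit_DHT.read_retry(SENSOR, SENSOR_PIN)
    mc.setBlock(x, y, z, block_for(temperature, humidity).id)

if __name__ == "__main__":
    px, py, pz = mc.player.getTilePos()
    for i in range(20):            # build a small column next to the player
        build_step(px + 2, py + i, pz)
        time.sleep(60)             # one block per minute, paced by the environment
```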
#Good vs. #Evil – (Maxime Castelli) Top: a transformed car racing game. Bottom: a race going on between two Twitter hashtags, materialized by two cars. #Good vs. #Evil is a quite straightforward project: a hack of an existing two-car racing game. The bot counts occurrences of two hashtags on Twitter, #Good and #Evil. At each new occurrence of one or the other word, the device gives an electric input to its associated car. The result is a slow and perpetual race between "good" and "evil" through the online iterations of their hashtags. A different kind of data visualization combined with racing cars.
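As an illustration of the underlying mechanics, and not the project's actual implementation, the following sketch assumes Twitter streaming access via the tweepy library and two GPIO pins on a Raspberry Pi wired to the cars' throttle inputs; the pin numbers and the bearer token are placeholders.

```python
# Sketch only: one short electric pulse per hashtag occurrence seen in the stream.
import time
import tweepy
import RPi.GPIO as GPIO

BEARER_TOKEN = "..."          # Twitter API credential, placeholder
GOOD_PIN, EVIL_PIN = 17, 27   # GPIO pins wired to each car's controller (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup([GOOD_PIN, EVIL_PIN], GPIO.OUT, initial=GPIO.LOW)

def pulse(pin: int, seconds: float = 0.2) -> None:
    """Give one short electric input to the car attached to this pin."""
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(seconds)
    GPIO.output(pin, GPIO.LOW)

class RaceStream(tweepy.StreamingClient):
    def on_tweet(self, tweet):
        text = tweet.text.lower()
        if "#good" in text:
            pulse(GOOD_PIN)
        if "#evil" in text:
            pulse(EVIL_PIN)

if __name__ == "__main__":
    stream = RaceStream(BEARER_TOKEN)
    stream.add_rules(tweepy.StreamRule("#good OR #evil"))
    stream.filter()   # blocks and calls on_tweet for each matching tweet
```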
The “Like” Hotline – (Mylène Dreyer, Caroline Buttet, Guillaume Cerdeira) Top: Caroline Buttet and Mylène Dreyer explaining their project. The laptop screen, showing a Facebook account, is projected on the left outer part of the image. Bottom: Caroline Buttet using a hacked phone to "like" pages. The "Like" Hotline proposes to hack a regular phone and install a hotline bot on it. Connected to its own Facebook account, which follows a few personalities and the posts they make, the bot asks questions of the caller, who can answer using the phone's keypad. After navigating through a few choices, the hotline bot helps you like a post on the social network. A different kind of hotline combined with a social network.
Simoncoin – (Romain Cazier) Top: Romain Cazier introducing his "coin" project. Bottom: the device combines an old "Simon" memory game with the production of digital coins. Simoncoin was unfortunately not finished at the end of the workshop week, but it was thought out in such detail that it would take too long to explain in this short presentation. The main idea was to use the game logic of Simon to generate coins. In a parallel to Bitcoins, which are harder and harder to mine, Simon Coins also become more and more difficult to generate due to the game logic. A different kind of money combined with a memory game.
The Wifi Cracker – (Bastien Girshig, Martin Hertig) Top: Bastien Girshig and Martin Hertig (to the left of Matthew Plummer-Fernandez) presenting. Middle and bottom: the wifi password cracker slowly displays the letters of the wifi password. The Wifi Cracker is an object that you can leave on its own in a space. At first glance it looks a little bit like a clock, but it won't display the time. Instead, it looks for available wifi networks in the area and starts trying to find their protected passwords (Bastien and Martin found a ready-made process for that). The bot tests all possible combinations, which takes time. Once the device has found the working password, it uses its round display to transmit it, letter by letter and slowly as well. A different kind of cuckoo clock combined with a password cracker.
Acknowledgments: Many thanks to Matthew Plummer-Fernandez for his involvement and great workshop direction; to Lucien Langton for his involvement, technical digging into Raspberry Pis, pictures and documentation; to Nicolas Nova and Charles Chalas (from HEAD), as well as Christophe Guignard, Christian Babski and Alain Bellet, for taking part in or helping during the final presentation. A special thanks to the students from ECAL involved in the project and for the energy they've put into it: Anne-Sophie Bazard, Benjamin Botros, Maxime Castelli, Romain Cazier, Guillaume Cerdeira, Mylène Dreyer, Bastien Girshig, Jonas Lacôte, Alexia Léchot, Nicolas Nahornyj, Pierre-Xavier Puissant.
From left to right: Bastien Girshig, Martin Hertig (The Wifi Cracker project), Nicolas Nova, Matthew Plummer-Fernandez (#Algopop), a "mystery girl", Christian Babski (in the background), Patrick Keller, Sebastian Vargas, Pierre-Xavier Puissant (Robinson Miner), Alain Bellet and Lucien Langton (taking the pictures…) during the final presentation on Friday.
Posted by Patrick Keller
in Interaction design
at
14:44
Defined tags for this entry: behaviour, code, computing, data, design (interactions), designers, devices, interaction design, interface, networks, research, robotics, teaching, ubiquitous
Friday, November 21. 2014
I&IC workshop at ECAL: The birth of Botcaves | #iiclouds #designresearch #bots
Note: the workshop continues and should finish today. We'll document and publish the results next week. As the workshop is all about small-size and situated computing, Lucien Langton (assistant on the project) made a short tutorial on how to set up your Pi. I'll also publish the Github repository that Matthew Plummer-Fernandez has set up.
Via iiclouds.org -----
The bots are running! The second workshop of I&IC's research study started yesterday with Matthew's presentation to the students. A video of the presentation might be added to this post later on, but for now here's the [pdf]: Botcaves. First prototypes set up by the students include bots playing Minecraft, bots cracking wifi passwords, and bots triggered by onboard IR cameras. So far, some groups worked directly with Python scripts deployed via SSH onto the Pis, others established a client-server connection between their Mac and their Pi by installing Processing on their Raspberry Pi, and some decided to start by hacking hardware to connect to their bots later. The research process will be continuously documented during the week.
The Wifi cracking Bot
Hacking a phone
Connecting to Pi via Proce55ing
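For readers who want to reproduce the client-server setup mentioned above, here is a minimal sketch using plain Python sockets (an assumption on our side; the students used Processing on the Pi for this role): run the server on the Raspberry Pi and send it commands from the laptop over the local network. Host, port and the command handling are placeholders.

```python
# Sketch only: a tiny TCP server running on the Raspberry Pi that receives
# text commands from a laptop on the same network and could trigger bot actions.
import socket

HOST, PORT = "0.0.0.0", 5005   # the Pi listens on all interfaces, arbitrary port

def run_server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            print("client connected:", addr)
            while True:
                data = conn.recv(1024)
                if not data:
                    break
                command = data.decode().strip()
                print("received command:", command)   # e.g. trigger a bot action here

if __name__ == "__main__":
    run_server()
```

From the laptop, any TCP client (Python's own socket module, Processing's Client class, or even netcat) can then connect to the Pi's IP address on port 5005 and send newline-terminated commands.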
Posted by Patrick Keller
in Design, Interaction design
at
08:24
Defined tags for this entry: artificial reality, data, design, devices, farming, interaction design, interface, robotics, tangible, teaching
Friday, October 17. 2014
“Hello, Computer” – Intel’s New Mobile Chips Are Always Listening | #monitoring #always
Note: are we all on our way, not to LA, but to HER... ?
----- Tablets and laptops coming later this year will be able to constantly listen for voice commands thanks to new chips from Intel. By Tom Simonite
New processors: A silicon wafer etched with Intel’s Core M mobile chips.
A new line of mobile chips unveiled by Intel today makes it possible to wake up a laptop or tablet simply by saying “Hello, computer.” Once it has been awoken, the computer can operate as a voice-controlled virtual assistant. You might call out “Hello, computer, what is the weather forecast today?” while getting out of bed.

Tablets and lightweight laptops based on the new Core M line of chips will go on sale at the end of this year. They can constantly listen for voice instructions thanks to a component known as a digital signal processor core that’s dedicated to processing audio with high efficiency and minimal power use. “It doesn’t matter what state the system will be in, it will be listening all the time,” says Ed Gamsaragan, an engineer at Intel. “You could be actively doing work or it could be in standby.”

It is possible to set any two- or three-word phrase to rouse a computer with a Core M chip. A device can also be trained to respond only to a specific voice. The voice-print feature isn’t accurate enough to replace a password, but it could prevent a device from being accidentally woken up, says Gamsaragan. If coupled with another biometric measure, such as a webcam with facial recognition, however, a voice command could work as a security mechanism, he says.

Manufacturers will decide how to implement the voice features in Intel’s Core M chips in devices that will appear on shelves later this year. The wake-on-voice feature is compatible with any operating system. That means it could be possible to summon Microsoft’s virtual assistant Cortana in Windows, or Google’s voice search functions in Chromebook devices. The only mobile device on the market today that can constantly listen for commands is the Moto X smartphone from Motorola (see “The Era of Ubiquitous Listening Dawns”). It has a dedicated audio chip that constantly listens for the command “OK, Google,” which activates the Google search app.

Intel’s Core M chips are based on the company’s new generation of smaller transistors, with features as small as 14 nanometers. This new architecture makes chips more power efficient and cooler than earlier generations, so Core M devices don’t require cooling fans. Intel says that the 14-nanometer architecture will make it possible to make laptops and tablets much thinner than they are today. This summer the company showed off a prototype laptop that is only 7.2 millimeters (0.28 inches) thick. That’s slightly thinner than Apple’s iPad Air, which is 7.5 millimeters thick, but Intel’s prototype packed considerably more computing power.
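The always-on listening described here happens in a dedicated DSP core, below the operating system. A user-space approximation of the same wake-phrase behaviour can nevertheless be sketched in a few lines; the example below is only an analogy, not Intel's implementation, and it assumes the speech_recognition Python package with the offline PocketSphinx engine installed.

```python
# Sketch only: block until a chosen wake phrase is heard on the default microphone.
# Unlike the Core M DSP, this runs on the main CPU and is far less power efficient.
import speech_recognition as sr

WAKE_PHRASE = "hello computer"   # any short phrase, as in the article

def wait_for_wake_phrase() -> None:
    recognizer = sr.Recognizer()
    with sr.Microphone() as mic:
        recognizer.adjust_for_ambient_noise(mic)
        while True:
            audio = recognizer.listen(mic)
            try:
                heard = recognizer.recognize_sphinx(audio).lower()
            except sr.UnknownValueError:
                continue                      # nothing intelligible, keep listening
            if WAKE_PHRASE in heard:
                print("awake: hand off to the assistant / command handler")
                return

if __name__ == "__main__":
    wait_for_wake_phrase()
```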
Posted by Patrick Keller
in Culture & society, Interaction design, Science & technology
at
08:06
Defined tags for this entry: artificial reality, culture & society, digital life, hardware, interaction design, interface, participative, robotics, science & technology, voice
Wednesday, October 15. 2014
Town Built for Driverless Cars | #automated
Note: after zoning for drones within cities, will we also develop cities with specific "city marks" dedicated to driverless cars? It reminds me a bit of a design research project done a few years ago, The New Robot Domesticity, whose purpose was to design objects so that robots could also recognize and use them. Further back, it also reminds me of a workshop we organized at ECAL in 2005 with researcher Frederic Kaplan (now head of Digital Humanities at EPFL), whose purpose was to design artefacts for the Sony Aibo (a doc. video here). This latter project was realized in the frame of the research project Variable Environment.
----- Tricky intersections and rogue mechanical pedestrians will provide a testing area for automated and connected cars. By Will Knight
The site of Ann Arbor’s driverless town, currently under construction.
A mocked-up set of busy streets in Ann Arbor, Michigan, will provide the sternest test yet for self-driving cars. Complex intersections, confusing lane markings, and busy construction crews will be used to gauge the aptitude of the latest automotive sensors and driving algorithms; mechanical pedestrians will even leap into the road from between parked cars so researchers can see if they trip up onboard safety systems.

The urban setting will be used to create situations that automated driving systems have struggled with, such as subtle driver-pedestrian interactions, unusual road surfaces, tunnels, and tree canopies, which can confuse sensors and obscure GPS signals. “If you go out on the public streets you come up against rare events that are very challenging for sensors,” says Peter Sweatman, director of the University of Michigan’s Mobility Transformation Center, which is overseeing the project. “Having identified challenging scenarios, we need to re-create them in a highly repeatable way. We don’t want to be just driving around the public roads.”

Google and others have been driving automated cars around public roads for several years, albeit with a human ready to take the wheel if necessary. Most automated vehicles use accurate digital maps and satellite positioning, together with a suite of different sensors, to navigate safely. Highway driving, which is less complex than city driving, has proved easy enough for self-driving cars, but busy downtown streets—where cars and pedestrians jockey for space and behave in confusing and surprising ways—are more problematic.

“I think it’s a great idea,” says John Leonard, a professor at MIT who led the development of a self-driving vehicle for a challenge run by DARPA in 2007. “It is important for us to try to collect statistically meaningful data about the performance of self-driving cars. Repeated operations—even in a small-scale environment—can yield valuable data sets for testing and evaluating new algorithms.”

The simulation is being built on the edge of the University of Michigan’s campus with funding from the Michigan Department of Transportation and 13 companies involved with developing automated driving technology. It is scheduled to open next spring. It will consist of four miles of roads with 13 different intersections.

Even Google, which has an ambitious vision of vehicle automation, acknowledges that urban driving is a significant challenge. Speaking at an event in California this July, Chris Urmson, who leads the company’s self-driving car project, said several common urban situations remain thorny (see “Urban Jungle a Tough Challenge for Google’s Autonomous Car”). Speaking with MIT Technology Review last month, Urmson gave further details about as-yet-unsolved scenarios (see “Hidden Obstacles for Google’s Self-Driving Cars”).
Such challenges notwithstanding, the first automated cars will go into production shortly. General Motors announced last month that a 2017 Cadillac will be the first car to offer entirely automated driving on highways. It’s not yet clear how the system will work—for example, how it will ensure that the driver isn’t too distracted to take the wheel in an emergency, or under what road conditions it might refuse to take the wheel—but in some situations, the car’s Super Cruise system will take care of steering, braking, and accelerating.

Another technology to be tested in the simulated town is vehicle-to-vehicle communications. The University of Michigan recently concluded a government-funded study in Ann Arbor involving thousands of vehicles equipped with transmitters that broadcast position, direction of travel, speed, and other information to other vehicles and to city infrastructure. The trial showed that vehicle-to-vehicle and vehicle-to-infrastructure communications could prevent many common accidents by providing advanced warning of a possible collision.

“One of the interesting things, from our point of view, is what extra value you get by combining” automation and car-to-car communications, Sweatman says. “What happens when you put the two together—how much faster can you deploy it?”
Posted by Patrick Keller
in Design, Science & technology, Territory
at
13:35
Defined tags for this entry: artificial reality, design, digital life, interface, interferences, perception, robotics, science & technology, territory, urbanism
Wednesday, April 02. 2014
Why Her will dominate UI design ... (as an anti Minority Report) | #interaction #interface #ai
Via Wired -----
The future we see in Her is one where technology has dissolved into everyday life.
A few weeks into the making of Her, Spike Jonze’s new flick about romance in the age of artificial intelligence, the director had something of a breakthrough. After poring over the work of Ray Kurzweil and other futurists trying to figure out how, exactly, his artificially intelligent female lead should operate, Jonze arrived at a critical insight: Her, he realized, isn’t a movie about technology. It’s a movie about people. With that, the film took shape. Sure, it takes place in the future, but what it’s really concerned with are human relationships, as fragile and complicated as they’ve been from the start.

Of course on another level Her is very much a movie about technology. One of the two main characters is, after all, a consciousness built entirely from code. That aspect posed a unique challenge for Jonze and his production team: They had to think like designers.

Assuming the technology for AI was there, how would it operate? What would the relationship with its “user” be like? How do you dumb down an omniscient interlocutor for the human on the other end of the earpiece? When AI is cheap, what does all the other technology look like?
For production designer KK Barrett, the man responsible for styling the world in which the story takes place, Her represented another sort of design challenge. Barrett’s previously brought films like Lost in Translation, Marie Antoinette, and Where the Wild Things Are to life, but the problem here was a new one, requiring more than a little crystal ball-gazing. The big question: In a world where you can buy AI off the shelf, what does all the other technology look like?
Technology Shouldn’t Feel Like Technology

One of the first things you notice about the “slight future” of Her, as Jonze has described it, is that there isn’t all that much technology at all. The main character Theo Twombly, a writer for the bespoke love letter service BeautifulHandwrittenLetters.com, still sits at a desktop computer when he’s at work, but otherwise he rarely has his face in a screen. Instead, he and his fellow future denizens are usually just talking, either to each other or to their operating systems via a discrete earpiece, itself more like a fancy earplug than anything resembling today’s cyborgian Bluetooth headsets.

In this “slight future” world, things are low-tech everywhere you look. The skyscrapers in this futuristic Los Angeles haven’t turned into towering video billboards a la Blade Runner; they’re just buildings. Instead of a flat screen TV, Theo’s living room just has nice furniture.

This is, no doubt, partly an aesthetic concern; a world mediated through screens doesn’t make for very rewarding mise en scene. But as Barrett explains it, there’s a logic to this technological sparseness. “We decided that the movie wasn’t about technology, or if it was, that the technology should be invisible,” he says. “And not invisible like a piece of glass.” Technology hasn’t disappeared, in other words. It’s dissolved into everyday life.

Here’s another way of putting it. It’s not just that Her, the movie, is focused on people. It also shows us a future where technology is more people-centric. The world Her shows us is one where the technology has receded, or one where we’ve let it recede. It’s a world where the pendulum has swung back the other direction, where a new generation of designers and consumers have accepted that technology isn’t an end in itself–that it’s the real world we’re supposed to be connecting to. (Of course, that’s the ideal; as we see in the film, in reality, making meaningful connections is as difficult as ever.)
Theo still has a desktop display at work and at home, but elsewhere technology is largely invisible.
Jonze had help in finding the contours of this slight future, including conversations with designers from New York-based studio Sagmeister & Walsh and an early meeting with Elizabeth Diller and Ricardo Scofidio, principals at architecture firm DS+R. As the film’s production designer, Barrett was responsible for making it a reality. Throughout that process, he drew inspiration from one of his favorite books, a visual compendium of futuristic predictions from various points in history. Basically, the book reminded Barrett what not to do. “It shows a lot of things and it makes you laugh instantly, because you say, ‘those things never came to pass!’” he explains. “But often times, it’s just because they over-thought it. The future is much simpler than you think.”

That’s easy to say in retrospect, looking at images of Rube Goldbergian kitchens and scenes of commute by jet pack. But Jonze and Barrett had the difficult task of extrapolating that simplification forward from today’s technological moment.

Theo’s home gives us one concise example. You could call it a “smart house,” but there’s little outward evidence of it. What makes it intelligent isn’t the whizbang technology but rather simple, understated utility. Lights, for example, turn off and on as Theo moves from room to room. There’s no app for controlling them from the couch; no control panel on the wall. It’s all automatic. Why? “It’s just a smart and efficient way to live in a house,” says Barrett.

Today’s smartphones were another object of Barrett’s scrutiny. “They’re advanced, but in some ways they’re not advanced whatsoever,” he says. “They need too much attention. You don’t really want to be stuck engaging them. You want to be free.” In Barrett’s estimation, the smartphones just around the corner aren’t much better. “Everyone says we’re supposed to have a curved piece of flexible glass. Why do we need that? Let’s make it more substantial. Let’s make it something that feels nice in the hand.”
Theo’s smartphone was designed to be “substantial,” something that first and foremost “feels good in the hand.”
Theo’s phone in the film is just that–a handsome hinged device that looks more like an art deco cigarette case than an iPhone. He uses it far less frequently than we use our smartphones today; it’s functional, but it’s not ubiquitous. As an object, it’s more like a nice wallet or watch. In terms of industrial design, it’s an artifact from a future where gadgets don’t need to scream their sophistication–a future where technology has progressed to the point that it doesn’t need to look like technology.

All of these things contribute to a compelling, cohesive vision of the future–one that’s dramatically different from what we usually see in these types of movies. You could say that Her is, in fact, a counterpoint to that prevailing vision of the future–the anti-Minority Report.

Imagining its world wasn’t about heaping new technology on society as we know it today. It was looking at those places where technology could fade into the background, integrate more seamlessly. It was about envisioning a future, perhaps, that looked more like the past. “In a way,” says Barrett, “my job was to undesign the design.”
The Holy Grail: A Discrete User Interface

The greatest act of undesigning in Her, technologically speaking, comes with the interface used throughout the film. Theo doesn’t touch his computer–in fact, while he has a desktop display at home and at work, neither has a keyboard. Instead, he talks to it. “We decided we didn’t want to have physical contact,” Barrett says. “We wanted it to be natural. Hence the elimination of software keyboards as we know them.”

Again, voice control had benefits simply on the level of moviemaking. A conversation between Theo and Sam, his artificially intelligent OS, is obviously easier for the audience to follow than anything involving taps, gestures, swipes or screens. But the voice-based UI was also a perfect fit for a film trying to explore what a less intrusive, less demanding variety of technology might look like.
Indeed, if you’re trying to imagine a future where we’ve managed to liberate ourselves from screens, systems based around talking are hard to avoid. As Barrett puts it, the computers we see in Her “don’t ask us to sit down and pay attention” like the ones we have today. He compares it to the fundamental way music beats out movies in so many situations. Music is something you can listen to anywhere. It’s complementary. It lets you operate in 360 degrees. Movies require you to be locked into one place, looking in one direction. As we see in the film, no matter what Theo’s up to in real life, all it takes to bring his OS into the fold is to pop in his ear plug.

Looking at it that way, you can see the audio-based interface in Her as a novel form of augmented reality computing. Instead of overlaying our vision with a feed, as we’ve typically seen it, Theo gets one piped into his ear. At the same time, the other ear is left free to take in the world around him.

Barrett sees this sort of arrangement as an elegant end point to the trajectory we’re already on. Think about what happens today when we’re bored at the dinner table. We check our phones. At the same time, we realize that’s a bit rude, and as Barrett sees it, that’s one of the great promises of the smartwatch: discretion. “They’re a little more invisible. A little sneakier,” he says. Still, they’re screens that require eyeballs. Instead, Barrett says, “imagine if you had an ear plug in and you were getting your feed from everywhere.” Your attention would still be divided, but not nearly as flagrantly.
Of course, a truly capable voice-based UI comes with other benefits. Conversational interfaces make everything easier to use. When every different type of device runs an OS that can understand natural language, it means that every menu, every tool, every function is accessible simply by requesting it.

That, too, is a trend that’s very much alive right now. Consider how today’s mobile operating systems, like iOS and ChromeOS, hide the messy business of file systems out of sight. Theo, with his voice-based valet as intermediary, is burdened with even less under-the-hood stuff than we are today. As Barrett puts it: “We didn’t want him fiddling with things and fussing with things.” In other words, Theo lives in a future where everything, not just his iPad, “just works.”
AI: the ultimate UX challenge

The central piece of invisible design in Her, however, is that of Sam, the artificially intelligent operating system and Theo’s eventual romantic partner. Their relationship is so natural that it’s easy to forget she’s a piece of software. But Jonze and company didn’t just write a girlfriend character, label it AI, and call it a day. Indeed, much of the film’s dramatic tension ultimately hinges not just on the ways artificial intelligence can be like us but the ways it cannot.

Much of Sam’s unique flavor of AI was written into the script by Jonze himself. But her inclusion led to all sorts of conversations among the production team about the nature of such a technology. “Anytime you’re dealing with trying to interact with a human, you have to think of humans as operating systems. Very advanced operating systems. Your highest goal is to try to emulate them,” Barrett says. Superficially, that might mean considering things like voice pattern and sensitivity and changing them based on the setting or situation.

Even more questions swirled when they considered how an artificially intelligent OS should behave. Are they a good listener? Are they intuitive? Do they adjust to your taste and line of questioning? Do they allow time for you to think? As Barrett puts it, “you don’t want a machine that’s always telling you the answer. You want one that approaches you like, ‘let’s solve this together.’” In essence, it means that AI has to be programmed to dumb itself down. “I think it’s very important for OSes in the future to have a good bedside manner,” Barrett says. “As politicians have learned, you can’t talk at someone all the time. You have to act like you’re listening.”
AI’s killer app, as we see in the film, is the ability to adjust to the emotional state of its user.
As we see in the film, though, the greatest asset of AI might be that it doesn’t have one fixed personality. Instead, its ability to figure out what a person needs at a given moment emerges as the killer app. Theo, emotionally desolate in the midst of a hard divorce, is having a hard time meeting people, so Sam goads him into going on a blind date. When Theo’s friend Amy splits up with her husband, her own artificially intelligent OS acts as a sort of therapist. “She’s helping me work through some things,” Amy says of her virtual friend at one point.

In our own world, we may be a long way from computers that are able to sense when we’re blue and help raise our spirits in one way or another. But we’re already making progress down this path. In something as simple as a responsive web layout or iOS 7’s “Do Not Disturb” feature, we’re starting to see designs that are more perceptive about the real world context surrounding them–where or how or when they’re being used. Google Now and other types of predictive software are ushering in a new era of more personalized, more intelligent apps. And while Apple updating Siri with a few canned jokes about her Hollywood counterpart might not amount to a true sense of humor, it does serve as another example of how we’re making technology more human–a preoccupation that’s very much alive today.
Personal comment:
While I do agree with the idea that technology is becoming in some ways banal --or maybe, to use a better word, just common-- and that the future might not be about flying cars, fancy all-over hologram interfaces or backup video cities populated with personal clones; that it might rather be "in service of", and will "vanish" or "recede" into our daily atmospheres, environments, architectures, furniture, clothes, if not bodies or cells; we have to keep in mind that this could (and probably will) make it even more intrusive.
Posted by Patrick Keller
in Culture & society, Interaction design
at
07:51
Defined tags for this entry: artificial reality, culture & society, interaction design, interface, movie, narrative, presence, psychological, social, speculation, users
fabric | rblg

This blog is the survey website of fabric | ch - studio for architecture, interaction and research. We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting during our everyday practice and readings. Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations. This website is used by fabric | ch as an archive, and as references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.