Friday, November 21. 2014
Note: a message from Matthew on Tuesday about his ongoing I&IC workshop. More resources to come there by the end of the week, as students are looking in many different directions!
Note: the workshop continues and should finish today. We'll document and publish the results next week. As the workshop is all about small-size and situated computing, Lucien Langton (assistant on the project) made a short tutorial about how to set up your Pi. I'll also publish the GitHub repository that Matthew Plummer-Fernandez has set up.
The Bots are running! The second workshop of I&IC’s research study started yesterday with Matthew’s presentation to the students. A video of the presentation might be included in the post later on, but for now here’s the [pdf]: Botcaves
First prototypes set up by the students include bots playing Minecraft, bots cracking Wi-Fi networks, and bots triggered by onboard IR cameras. So far, some groups worked directly with Python scripts deployed via SSH onto the Pis, others established a client-server connection between their Mac and their Pi by installing Processing on the Raspberry Pi, and some decided to start by hacking hardware, to connect it to their bots later.
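As a hypothetical sketch of the client-server approach some groups took (the port, the simulated sensor, and all names are my own illustration, not the students' actual code), a laptop client could query a small Python service running on the Pi:

```python
import random
import socket
import threading

HOST, PORT = "127.0.0.1", 9000  # on a real setup, the Pi's LAN address

def read_sensor():
    """Placeholder for a real GPIO or IR-camera reading on the Pi."""
    return random.uniform(18.0, 24.0)

ready = threading.Event()

def serve_once():
    """Pi side: answer one client request with the current reading."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()  # signal that the server is accepting connections
        conn, _ = srv.accept()
        with conn:
            conn.sendall(f"{read_sensor():.2f}".encode())

def query():
    """Client side (e.g. a laptop or Processing sketch): fetch the reading."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        return float(cli.recv(64).decode())

server = threading.Thread(target=serve_once)
server.start()
ready.wait()          # avoid racing the server's bind/listen
value = query()
server.join()
print(f"sensor reading: {value:.2f}")
```

Both ends run here in one process for demonstration; on the real setup the server half would live on the Pi and the client half on the Mac.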
The research process will be continuously documented during the week.
Posted by Patrick Keller in Design, Interaction design at 08:24
Defined tags for this entry: artificial reality, data, design, devices, farming, interaction design, interface, robotics, tangible
Wednesday, November 19. 2014
| rblg note: following my previous post about the design research project we are leading with Nicolas Nova, a workshop is going on this week at the ECAL with our guest contributor Matthew Plummer-Fernandez (aka #Algopop). I'll reblog here during the coming days what's happening on our parallel blog (iiclouds.org).
Note: I publish here the brief that Matthew Plummer-Fernandez (a.k.a. Algopop) sent me before the workshop he'll lead next week (17-21.11) with Media & Interaction Design students from the 2nd and 3rd year BA at the ECAL.
This workshop will take place in the frame of the I&IC research project, about which we had the occasion to exchange prior to the workshop. It will investigate the idea of very low-power computing, situated processing, data sensing/storage and automated data treatment ("bots") that could be highly distributed into everyday-life objects or situations. While doing so, the project will undoubtedly address the idea of "networked objects", which, due to the low capacities of their computing parts, will become major consumers of cloud-based services (computing power, storage). Yet, following the hypothesis of the research: what kind of non-standard networked objects/situations, based on what kind of decentralized, personal cloud architecture?
The subject of this workshop explains some of my recent posts, which could serve as resources or tools for it, as the students will work on personal "bots" that will gather, process, host and expose data.
Stay tuned for more!
Botcaves (by Matthew Plummer-Fernandez)
Algorithmic and autonomous software agents known as bots are increasingly participating in everyday life. Bots can potentially gather data from both physical and digital activity, store and share data in the ‘cloud’, and develop ways to communicate and learn from their databases. In essence, bots can animate data, making it useful, interactive, visual or legible. Bots, although software-based, require hardware to run from, and it is this underexplored crossover between the physical and digital presence of bots that this workshop investigates.
You will be asked to design a physical ‘housing’ or ‘interface’, either bespoke or hacked from existing objects, for your personal bots to run from. These botcaves would be present in the home, workspace or other, permitting novel interactions between the digital and physical environments that these bots inhabit.
Raspberry Pis, template bot code, APIs, cloud storage, existing services (Twitter, IFTTT, etc) and physical elements (sensors, lights, cameras, etc) may be used in the workshop.
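As a minimal, hypothetical illustration of the gather/store/expose cycle the brief describes (the file-based "datastore" and every name here are my own stand-ins, not the workshop's template bot code), a bot skeleton might look like:

```python
import json
import random
import statistics
import tempfile
from pathlib import Path

def sense():
    """Stand-in for a physical sensor on the Pi (e.g. a light level, 0-1023)."""
    return random.randint(0, 1023)

def store(path, reading):
    """Append a reading to the bot's datastore. A real botcave might sync
    this file to a cloud folder or push the reading to an API instead."""
    data = json.loads(path.read_text()) if path.exists() else []
    data.append(reading)
    path.write_text(json.dumps(data))

def expose(path):
    """Turn raw data into something legible: a one-line summary the bot
    could post to a display, a feed, or a service like Twitter."""
    data = json.loads(path.read_text())
    return f"{len(data)} readings, median {statistics.median(data)}"

db = Path(tempfile.mkdtemp()) / "botcave.json"
for _ in range(5):
    store(db, sense())
print(expose(db))
```

The point of the sketch is the loop, not the storage: the same three verbs (sense, store, expose) hold whether the backend is a local file, a personal cloud, or a third-party service.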
British/ Colombian Artist and Designer Matthew Plummer-Fernandez makes work that critically and playfully examines sociocultural entanglements with technologies. His current interests span algorithms, bots, automation, copyright, 3D files and file-sharing. He was awarded a Prix Ars Electronica Award of Distinction for the project Disarming Corruptor; an app for disguising 3D Print files as glitched artefacts. He is also known for his computational approach to aesthetics translated into physical sculpture.
For research purposes he runs Algopop, a popular tumblr that documents the emergence of algorithms in everyday life as well as the artists that respond to this context in their work. This has become the starting point to a practice-based PhD funded by the AHRC at Goldsmiths, University of London, where he is also a research associate at the Interaction Research Studio and a visiting tutor. He holds a BEng in Computer Aided Mechanical Engineering from Kings College London and an MA in Design Products from the Royal College of Art.
Thursday, November 13. 2014
By fabric | ch
I'm very happy to write that after several months of preparation, I'm leading a new design research (that follows Variable Environment, dating back to 2007!) for the University of Art & Design, Lausanne (ECAL), in partnership with Nicolas Nova (HEAD). The project will see the transversal collaboration of architects, interaction designers, ethnographers and scientists, with the aim of re-investigating "cloud computing" and its infrastructures from a different point of view. The name of the project is Inhabiting and Interfacing the Cloud(s), now online in the form of a blog that will document our progress. The project should last until 2016.
The main research team is composed of:
Patrick Keller, co-head (Prof. ECAL M&ID, fabric | ch) / Nicolas Nova, co-head (Prof. HEAD MD, Near Future Laboratory) / Christophe Guignard (Prof. ECAL M&ID, fabric | ch) / Lucien Langton (assistant ECAL M&ID) / Charles Chalas (assistant HEAD MD) / Dieter Dietz (Prof. EPFL - Alice) & Caroline Dionne (Post-doc EPFL - Alice) / Dr. Christian Babski (fabric | ch).
I&IC Workshops with students from the HEAD, ECAL (interaction design) and EPFL (architecture) will be conducted by:
James Auger (Prof. RCA, Auger - Loizeau) / Matthew Plummer-Fernandez (Visiting Tutor Goldsmiths College, Algopop) / Thomas Favre - Bulle (Lecturer EPFL).
Finally, a group of "advisors" will keep an eye on us and the research artifacts we may produce:
Babak Falsafi (Prof. EPFL - Ecocloud) / Prof. Zhang Ga (TASML, Tsinghua University) / Dan Hill (City of Sound, Future Cities Catapult) / Ludger Hovestadt (Prof. ETHZ - CAAD) / Geoff Manaugh (BLDGBLOG, Gizmodo).
Andrea Branzi, 1969, Research for "No-Stop City".
Google data center in Lenoir, North Carolina (USA), 2013.
As stated on the I&IC website:
The design research I&IC (Inhabiting and Interfacing the Clouds) explores the creation of counter-proposals to the current expression of “Cloud Computing”, particularly in its forms intended for private individuals and end users (“Personal Cloud”). It is led by Profs. Patrick Keller (ECAL) and Nicolas Nova (HEAD) and is documented online as a work in progress, 2014-2017.
I&IC aims to offer an alternative point of view and a critical appraisal, as well as to provide an “access to tools” regarding this iconic infrastructure of our modernity and its user interfaces, because to date their implementation has followed a logic chiefly of technical development, mainly governed by corporate interests, and continues therefore to be paradoxically envisioned as a purely functional, centralized setup.
However, the Personal Cloud holds a potential that is largely untapped in terms of design, novel uses and territorial strategies. Through its cross-disciplinary approach that links interaction design, the architectural and territorial dimensions as well as ethnographic studies, our project aims at producing alternative models resulting from a more contemporary approach, notably factoring in the idea of creolization (theorized by E. Glissant).
Wednesday, November 12. 2014
Note: an interesting new publication and project by Space Caviar (Joseph Grima --former Storefront for Art & Architecture, Domus, Adhocracy exhibition, etc.--, Tamar Shafir, Andrea Bagnato, Giulia Finazzi, Martina Muzi, Simone C. Niquille, Giulia Grattarola) about the changing nature of "home" under the pressure of "multiple forces" (if domesticity does, indeed, still exist, as the authors state it). Interestingly, some data files and charts used in the book are made publicly available via GitHub. It reminds me somehow of the recorded data about a public project that we made available on the project's site (Heterochrony), back in 2012.
Via Space Caviar
The way we live is rapidly changing under pressure from multiple forces—financial, environmental, technological, geopolitical. What we used to call home may not even exist anymore, having transmuted into a financial commodity measured in square meters, or sqm. Yet, domesticity ceased long ago to be central in the architectural agenda; this project aims to launch a new discussion on the present and the future of the home.
SQM: The Quantified Home, produced for the 2014 Biennale Interieur, charts the scale of this change using data, fiction, and a critical selection of homes and their interiors—from Osama bin Laden’s compound to apartment living in the age of Airbnb.
With original texts by: Rahel Aima, Aristide Antonas, Gabrielle Brainard and Jacob Reidel, Keller Easterling, Ignacio González Galán, Joseph Grima, Hilde Heynen, Dan Hill, Sam Jacob, Alexandra Lange, Justin McGuirk, Joanne McNeil, Alessandro Mendini, Jonathan Olivares, Marina Otero Verzier, Beatriz Preciado, Anna Puigjaner, Catharine Rossi, Andreas Ruby, Malkit Shoshan, and Bruce Sterling.
The book is published by Lars Müller, and will be available for sale worldwide from November 2014. The dust jacket is screen-printed on wallpaper in 22 different patterns, randomly mixed.
Download the table of contents
Friday, October 24. 2014
An interesting new project called Satellite Lamps, by Einar Sneve Martinussen, Jørn Knutsen, and Timo Arnall, attempts to visualize the ever-drifting, never exactly accurate workings of GPS.
[Image: From Satellite Lamps].
Friday, October 17. 2014
Motion, audio, and location data harvested from a smartphone can be analyzed to accurately predict stress or depression.
By Tom Simonite
Many smartphone apps use a device’s sensors to try to measure people’s physical well-being, for example by counting every step they take. A new app developed by researchers at Dartmouth College suggests that a phone’s sensors can also be used to peek inside a person’s mind and gauge mental health.
When 48 students let the app collect information from their phones for an entire 10-week term, patterns in the data matched up with changes in stress, depression, and loneliness that showed up when they took the kind of surveys doctors use to assess their patients’ mood and mental health. Trends in the phone data also correlated with students’ grades.
The results suggest that smartphone apps could offer people and doctors new ways to manage mental well-being, says Andrew Campbell, the Dartmouth professor who led the research.
Previous studies have shown that custom-built mobile gadgets could indirectly gauge mental states. The Dartmouth study, however, used Android smartphones like those owned by millions of people, says Campbell. “We’re the first to use standard phones and sensors that are just carried without any user interaction,” he says. A paper on the research was presented last week at the ACM International Joint Conference on Pervasive and Ubiquitous Computing in Seattle.
Campbell’s app, called StudentLife, collects data including a phone’s motion and location and the timing of calls and texts, and occasionally activates the microphone on a device to run software that can tell if a conversation is taking place nearby. Algorithms process that information into logs of a person’s physical activity, communication patterns, sleeping patterns, visits to different places, and an estimate of how often they were involved in face-to-face conversation. Many changes in those patterns were found to correlate significantly with changes in measures of depression, loneliness, and stress. For example, decline in exposure to face-to-face conversations was indicative of depression.
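As an illustration of the kind of correlation analysis described above (entirely synthetic data, not the StudentLife dataset), here is a minimal Pearson correlation between a simulated weekly face-to-face conversation count and a simulated depression score:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
# Synthetic 10-week series: conversation counts, and a depression score
# that (by construction) rises as conversations fall, plus some noise.
conversations = [random.randint(10, 40) for _ in range(10)]
depression = [50 - c + random.uniform(-3, 3) for c in conversations]

r = pearson(conversations, depression)
print(f"r = {r:.2f}")  # strongly negative, mirroring the study's finding
```

The study's actual pipeline is of course richer (sensor logs, inferred behaviors, clinical scales), but a significant negative correlation of this shape is the statistical claim being made.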
The surveys used as a benchmark for mental health in the study are more normally used by doctors to assess patients who seek help for mental health conditions. In the future, data from a person’s phone could provide a richer picture to augment a one-off survey when a person seeks help, says Campbell. He is also planning further research into how data from his app might be used to tip off individuals or their caregivers when behavioral patterns indicate that their mental health could be changing. In the case of students, that approach could provide a way to reduce dropout rates or help people improve their academic performance, says Campbell.
“Intervention is the next step,” he says. “It could be something simple like telling a person they should go and engage in conversations to improve their mood, or that, statistically, if you party only three nights a week you will get more decent grades.” Campbell is also working on a study testing whether a similar app could help predict relapses in people with schizophrenia.
A startup called Ginger.io, with an app similar to Campbell’s, is already testing similar ideas with some health-care providers. In one trial with diabetics, changes in a person’s behavior triggered an alert to nurses, who reached out to make sure that the patient was adhering to their medication (see “Smartphone Tracker Gives Doctors Remote Viewing Powers”).
Anmol Madan, CEO and cofounder of Ginger.io, says the Dartmouth study adds to the evidence that those ideas are valuable. However, he notes, much larger studies are needed to really convince doctors and health-care providers to adopt a new approach. Ginger.io has found similar associations between its own data and clinical scales for depression, says Madan, although results have not been published.
Both Ginger.io and the Dartmouth work were inspired by research at the MIT Media Lab that established the idea that data from personal devices offers a new way to study human behavior (see “TR10: Social Physics”). Yaniv Altshuler, a researcher who helped pioneer that approach, says the Dartmouth study is an interesting addition to that body of work, but it’s also a reminder that there will be downsides to the mobile data trove. Being able to use mobile devices to learn very sensitive information about people could raise new privacy risks.
Campbell—who got clearance for his study from an ethical review board—notes that his results show how existing privacy rules can be left behind by data mining. A health-care provider collecting data using standard mental health surveys would be bound by HIPAA data privacy regulations in the United States. It’s less clear what rules apply when that same data is derived from a phone app. “If you have signals you can use to work out, say, that I am a manic depressive, what governs use of that data is not well accepted,” he says.
Whatever the answer, apps that log the kind of rich data Campbell collected are likely to become more common. Smartphone sensors have become much more energy-efficient, so detailed, round-the-clock data logging is now feasible without wiping out battery life. “As of six months ago phones got to the point where we could do 24/7 sensing,” says Campbell. “All the technology has now arrived that you can do these things.”
Note: are we all on our way, not to LA, but to HER... ?
Tablets and laptops coming later this year will be able to constantly listen for voice commands thanks to new chips from Intel.
By Tom Simonite
New processors: A silicon wafer etched with Intel’s Core M mobile chips.
A new line of mobile chips unveiled by Intel today makes it possible to wake up a laptop or tablet simply by saying “Hello, computer.” Once it has been awoken, the computer can operate as a voice-controlled virtual assistant. You might call out “Hello, computer, what is the weather forecast today?” while getting out of bed.
Tablets and lightweight laptops based on the new Core M line of chips will go on sale at the end of this year. They can constantly listen for voice instructions thanks to a component known as a digital signal processor core that’s dedicated to processing audio with high efficiency and minimal power use.
“It doesn’t matter what state the system will be in, it will be listening all the time,” says Ed Gamsaragan, an engineer at Intel. “You could be actively doing work or it could be in standby.”
It is possible to set any two- or three-word phrase to rouse a computer with a Core M chip. A device can also be trained to respond only to a specific voice. The voice-print feature isn’t accurate enough to replace a password, but it could prevent a device from being accidentally woken up, says Gamsaragan. If coupled with another biometric measure, such as a webcam with facial recognition, however, a voice command could work as a security mechanism, he says.
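A toy model of the always-listening idea (my own simplification: the real DSP core runs actual keyword spotting, not just an energy gate) shows the architecture at work, where a cheap first stage decides when to wake the expensive recognizer:

```python
import math

FRAME = 160        # samples per frame (10 ms at 16 kHz)
THRESHOLD = 0.1    # RMS energy above which the main system is woken

def rms(frame):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def dsp_gate(samples):
    """Toy model of the low-power DSP stage: scan fixed-size frames and
    report which ones are loud enough to hand off to full recognition."""
    frames = [samples[i:i + FRAME]
              for i in range(0, len(samples) - FRAME + 1, FRAME)]
    return [rms(f) > THRESHOLD for f in frames]

# One second of synthetic 16 kHz audio: silence, a 440 Hz burst, silence.
silence = [0.0] * 8000
burst = [0.5 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(3200)]
signal = silence + burst + [0.0] * 4800

gates = dsp_gate(signal)
print(f"{sum(gates)} of {len(gates)} frames would wake the system")
# → 20 of 100 frames would wake the system
```

The power argument follows directly: the per-frame work above is a handful of multiply-adds, so it can run continuously on a dedicated core while the main CPU sleeps.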
Manufacturers will decide how to implement the voice features in Intel’s Core M chips in devices that will appear on shelves later this year.
The wake-on-voice feature is compatible with any operating system. That means it could be possible to summon Microsoft’s virtual assistant Cortana in Windows, or Google’s voice search functions in Chromebook devices.
The only mobile device on the market today that can constantly listen for commands is the Moto X smartphone from Motorola (see “The Era of Ubiquitous Listening Dawns”). It has a dedicated audio chip that constantly listens for the command “OK, Google,” which activates the Google search app.
Intel’s Core M chips are based on the company’s new generation of smaller transistors, with features as small as 14 nanometers. This new architecture makes chips more power efficient and cooler than earlier generations, so Core M devices don’t require cooling fans.
Intel says that the 14-nanometer architecture will make it possible to make laptops and tablets much thinner than they are today. This summer the company showed off a prototype laptop that is only 7.2 millimeters (0.28 inches) thick. That’s slightly thinner than Apple’s iPad Air, which is 7.5 millimeters thick, but Intel’s prototype packed considerably more computing power.
Wednesday, October 15. 2014
Note: after the zoning of drones within cities, will we equip cities with specific "city marks" dedicated to driverless cars? It reminds me a bit of a design research project done a few years ago, The New Robot Domesticity, whose purpose was to design objects so that robots could also recognize/use them. Further back, it also reminds me of a workshop we organized at the ECAL in 2005 with researcher Frederic Kaplan (now head of Digital Humanities at EPFL), whose purpose was to design artefacts for the Sony Aibo (a doc. video here). This latter project was realized in the frame of the research project Variable Environment.
Tricky intersections and rogue mechanical pedestrians will provide a testing area for automated and connected cars.
By Will Knight
The site of Ann Arbor’s driverless town, currently under construction.
A mocked-up set of busy streets in Ann Arbor, Michigan, will provide the sternest test yet for self-driving cars. Complex intersections, confusing lane markings, and busy construction crews will be used to gauge the aptitude of the latest automotive sensors and driving algorithms; mechanical pedestrians will even leap into the road from between parked cars so researchers can see if they trip up onboard safety systems.
The urban setting will be used to create situations that automated driving systems have struggled with, such as subtle driver-pedestrian interactions, unusual road surfaces, tunnels, and tree canopies, which can confuse sensors and obscure GPS signals.
“If you go out on the public streets you come up against rare events that are very challenging for sensors,” says Peter Sweatman, director of the University of Michigan’s Mobility Transformation Center, which is overseeing the project. “Having identified challenging scenarios, we need to re-create them in a highly repeatable way. We don’t want to be just driving around the public roads.”
Google and others have been driving automated cars around public roads for several years, albeit with a human ready to take the wheel if necessary. Most automated vehicles use accurate digital maps and satellite positioning, together with a suite of different sensors, to navigate safely.
Highway driving, which is less complex than city driving, has proved easy enough for self-driving cars, but busy downtown streets—where cars and pedestrians jockey for space and behave in confusing and surprising ways—are more problematic.
“I think it’s a great idea,” says John Leonard, a professor at MIT who led the development of a self-driving vehicle for a challenge run by DARPA in 2007. “It is important for us to try to collect statistically meaningful data about the performance of self-driving cars. Repeated operations—even in a small-scale environment—can yield valuable data sets for testing and evaluating new algorithms.”
The simulation is being built on the edge of the University of Michigan’s campus with funding from the Michigan Department of Transportation and 13 companies involved with developing automated driving technology. It is scheduled to open next spring. It will consist of four miles of roads with 13 different intersections.
Even Google, which has an ambitious vision of vehicle automation, acknowledges that urban driving is a significant challenge. Speaking at an event in California this July, Chris Urmson, who leads the company’s self-driving car project, said several common urban situations remain thorny (see “Urban Jungle a Tough Challenge for Google’s Autonomous Car”). Speaking with MIT Technology Review last month, Urmson gave further details about as-yet-unsolved scenarios (see “Hidden Obstacles for Google’s Self-Driving Cars”).
Such challenges notwithstanding, the first automated cars will go into production shortly. General Motors announced last month that a 2017 Cadillac will be the first car to offer entirely automated driving on highways. It’s not yet clear how the system will work—for example, how it will ensure that the driver isn’t too distracted to take the wheel in an emergency, or under what road conditions it might refuse to take the wheel—but in some situations, the car’s Super Cruise system will take care of steering, braking, and accelerating.
Another technology to be tested in the simulated town is vehicle-to-vehicle communications. The University of Michigan recently concluded a government-funded study in Ann Arbor involving thousands of vehicles equipped with transmitters that broadcast position, direction of travel, speed, and other information to other vehicles and to city infrastructure. The trial showed that vehicle-to-vehicle and vehicle-to-infrastructure communications could prevent many common accidents by providing advanced warning of a possible collision.
“One of the interesting things, from our point of view, is what extra value you get by combining” automation and car-to-car communications, Sweatman says. “What happens when you put the two together—how much faster can you deploy it?”
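As a hedged sketch of how a receiving car might turn such broadcast position/speed messages into an advance warning (my own illustration; the article does not describe the actual V2V message formats or algorithms), a constant-velocity time-to-collision check:

```python
import math

def time_to_collision(pos_a, vel_a, pos_b, vel_b, radius=3.0):
    """Earliest t >= 0 at which two vehicles, modelled as points moving at
    constant velocity in the plane, come within `radius` metres of each
    other; None if they never do. Positions in metres, velocities in m/s."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    dvx, dvy = vel_b[0] - vel_a[0], vel_b[1] - vel_a[1]
    # Solve |d + t*dv|^2 = radius^2 for the earliest non-negative t.
    a = dvx * dvx + dvy * dvy
    b = 2 * (dx * dvx + dy * dvy)
    c = dx * dx + dy * dy - radius * radius
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return None  # parallel motion, or closest approach stays outside radius
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

# Two cars approaching the same intersection at right angles:
# car A 50 m west doing 10 m/s east, car B 40 m south doing 8 m/s north.
t = time_to_collision((-50, 0), (10, 0), (0, -40), (0, 8))
print("warn" if t is not None and t < 6 else "clear", t)
```

A real deployment would of course fold in heading uncertainty, braking models, and message latency; the value of the broadcast is precisely that this check can fire seconds before either car's onboard sensors have line of sight.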
Wednesday, October 08. 2014
Note: a few of our recent works and exhibitions are included in this promising young publication related to architectural thinking, Desierto, edited by Paper - Architectural Histamine in Madrid. At the editorial team's invitation, I had the occasion to write a paper about Deterritorialized Living and one of its physical installations last year in Pau (France), during Pau Acces(s). We also took the occasion of the publication to give a glimpse of a related research project called Algorithmic Atomized Functioning.
By fabric | ch
From the editorial team:
"The temperature of the invisible and the desacralization of the air.
28° Celsius is the temperature at which protection becomes superfluous. It is also the temperature at which swimming pools are acclimatised. Within the limits of this hygrothermal comfort zone, we do not require the intervention of our body's thermoregulatory mechanisms nor that of any external artificial thermal controls in order to feel pleasantly comfortable while carrying out a sedentary activity without clothing. 28° Celsius is thus the temperature at which clothing can disappear, just as architecture could."
Authors are Gabriel Ruiz-Larrea, Sean Lally, Philippe Rahm, Nerea Calvillo, myself, Helen Mallinson, Antonio Cobo, José Vella Castillo and Pauly Garcia-Masedo.
Editorial by Gabriel Ruiz-Larrea (editor in chief). Editorial team composed of Natalia David, Nuria Úrculo, María Buey and Daniel Lacasta Fitzsimmons.
Inhabiting Deterritorialization, by Patrick Keller.
Desierto #3 and past issues can be ordered online on Paper bookstore.
fabric | rblg
fabric | rblg is the survey website of fabric | ch -- studio for architecture, interaction and research. We curate and re-blog articles, researches, exhibitions and projects that we notice during our everyday practice.