Tuesday, August 02. 2016
By fabric | ch
Since we still lack a decent search engine on this blog and we don't use a "tag cloud"... this post may help you navigate the updated content on | rblg (as of 07.2016), via all its tags!
HERE ARE ALL THE CURRENT TAGS TO NAVIGATE ON | RBLG BLOG:
(to be seen just below if you're navigating on the blog's page or here for rss readers)
Posted by Patrick Keller in fabric | ch at 16:58
Defined tags for this entry: 3d, activism, advertising, agriculture, air, animation, applications, archeology, architects, architecture, art, art direction, artificial reality, artists, atmosphere, automation, behaviour, bioinspired, biotech, blog, body, books, brand, character, citizen, city, climate, clips, code, cognition, collaboration, commodification, communication, community, computing, conditioning, conferences, consumption, content, control, craft, culture & society, curators, customization, data, density, design, design (environments), design (fashion), design (graphic), design (interactions), design (motion), design (products), designers, development, devices, digital, digital fabrication, digital life, digital marketing, dimensions, direct, display, documentary, earth, ecal, ecology, economy, electronics, energy, engineering, environment, equipment, event, exhibitions, experience, experimentation, fabric | ch, farming, fashion, fiction, films, food, form, franchised, friends, function, future, gadgets, games, garden, generative, geography, globalization, goods, hack, hardware, harvesting, health, history, housing, hybrid, identification, illustration, images, information, infrastructure, installations, interaction design, interface, interferences, kinetic, knowledge, landscape, language, law, life, lighting, localization, localized, magazines, make, mapping, marketing, mashup, materials, media, mediated, mind, mining, mobile, mobility, molecules, monitoring, monography, movie, museum, music, nanotech, narrative, nature, networks, neurosciences, opensource, operating system, participative, particles, people, perception, photography, physics, physiological, politics, pollution, presence, print, privacy, product, profiling, projects, psychological, public, publishing, reactive, real time, recycling, research, resources, responsive, ressources, robotics, santé, scenography, schools, science & technology, scientists, screen, search, security, semantic, 
services, sharing, shopping, signage, smart, social, society, software, solar, sound, space, speculation, statement, surveillance, sustainability, tactile, tagging, tangible, targeted, teaching, technology, tele-, telecom, territory, text, textile, theory, thinkers, thinking, time, tools, topology, tourism, toys, transmission, trend, typography, ubiquitous, urbanism, users, variable, vernacular, video, viral, vision, visualization, voice, vr, war, weather, web, wireless, writing
Sunday, December 14. 2014
The third workshop we ran in the frame of I&IC with our guest researcher Matthew Plummer-Fernandez (Goldsmiths University) and the 2nd & 3rd year students (Ba) in Media & Interaction Design (ECAL) ended last Friday (| rblg note: on the 21st of Nov.) with interesting results. The workshop focused on small situated computing technologies that could collect, aggregate and/or “manipulate” data in automated ways (bots) and which would certainly need to rely heavily on cloud technologies due to their low storage and computing capacities. So to speak, “networked data objects” that will soon become very common, thanks to cheap new small computing devices (e.g. Raspberry Pis for diy applications) or sensors (e.g. Arduino, etc.) The title of the workshop was “Botcave”, whose objective was explained by Matthew in a previous post.
The choice of this context of work was defined according to our overall research objective, even though we knew that it wouldn’t directly address the “cloud computing” apparatus (something we learned to be a difficult approach during the second workshop), but that it would nonetheless question its interfaces and the way we experience the whole service, especially the evolution of this apparatus through new types of everyday interactions and data generation.
Matthew Plummer-Fernandez (#Algopop) during the final presentation at the end of the research workshop.
Through this workshop, Matthew and the students definitely raised the following points and questions:
1° Small situated technologies that will soon spread everywhere will become heavy users of cloud based computing and data storage, as they have low storage and computing capacities. While some might just use and manipulate existing data (like some of the workshop projects, i.e. #Good vs. #Evil or Moody Printer), they will mainly contribute to producing very large additional quantities of data (i.e. Robinson Miner). Yet the amount of meaningful data to be “pushed” and “treated” in the cloud remains a big question mark, as there will be (too) huge amounts of such data (Lucien will probably post something later about this subject: “fog computing”); this might end up with the need for interdisciplinary teams to rethink cloud architectures.
2° Stored data become “alive” or significant only when “manipulated”. This can be done by “analog users” of course, but in general it is now rather operated by rules and algorithms of different sorts (in the frame of this workshop: automated bots). Are these rules “situated” as well, and possibly context aware (context intelligent), i.e. Robinson Miner? Or are they somehow more abstract and located anywhere in the cloud? Both?
3° These “Networked Data Objects” (and soon “Networked Data Everything”) will contribute to “babelize” users’ interactions and interfaces in all directions, paving the way for new types of combinations and experiences (creolization processes); i.e. The Beast, The Like Hotline, Simon Coins or The Wifi Cracker could be considered starting phases of such processes. Cloud interfaces and computing will then become everyday “things” and, when at “home”, new domestic objects with which we’ll have totally different interactions (this last point must still be discussed though, as domesticity might not exist anymore according to Space Caviar).
Moody Printer – (Alexia Léchot, Benjamin Botros)
Moody Printer remains a basic conceptual proposal at this stage: a hacked printer, connected to a hidden Raspberry Pi (it would be located inside the printer), has access to weather information. Similarly to human beings, its “mood” can be affected by such inputs following some basic rules (good – bad, hot – cold, sunny – cloudy – rainy, etc.) The automated process then searches Google Images according to its defined “mood” (a direct link between “mood”, weather conditions and an exhaustive list of words) and then autonomously starts to print the results.
A different kind of printer combined with weather monitoring.
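The described pipeline (weather condition, then "mood", then search keywords) can be sketched in a few lines of Python. The mood and keyword tables below are illustrative placeholders, not the project's actual rules, and the weather API, image search and printing steps are left out:

```python
# Sketch of the Moody Printer logic: weather condition -> "mood" -> search
# keywords. In the installation, the Raspberry Pi would fetch the weather
# from an online service, run an image search on the keywords and send the
# results to the hacked printer; those steps are stubbed out here.

MOODS = {"sunny": "happy", "cloudy": "pensive", "rainy": "gloomy"}

KEYWORDS = {
    "happy": ["beach", "flowers", "smile"],
    "pensive": ["fog", "window", "tea"],
    "gloomy": ["rain", "grey", "puddle"],
}

def mood_for(condition: str) -> str:
    """Map a weather condition to the printer's current mood."""
    return MOODS.get(condition, "pensive")

def search_terms(condition: str) -> list[str]:
    """Keywords the bot would feed to an image search before printing."""
    return KEYWORDS[mood_for(condition)]

if __name__ == "__main__":
    print(search_terms("rainy"))
```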
The Beast – (Nicolas Nahornyj)
Top: Nicolas Nahornyj is presenting his project to the assembly. Bottom: the laptop and “the beast”.
The Beast is a device that asks to be fed with money at random times… It is your new laptop companion. To calm it down for a while, you must insert a coin in the slot provided for that purpose. If you don’t comply, not only will it continue to ask for money on a more frequent basis, but it will also randomly pick an image that lies around on your hard drive, post it on a popular social network (i.e. Facebook, Pinterest, etc.) and then erase this image from your local disk. Slowly, The Beast will remove all images from your hard drive and post them online…
A different kind of slot machine combined with private files stealing.
Robinson – (Anne-Sophie Bazard, Jonas Lacôte, Pierre-Xavier Puissant)
Top: Pierre-Xavier Puissant is looking at the autonomous “minecrafting” of his bot. Bottom: the proposed bot container, which takes up the idea of cubic construction. It could be placed in your garden, in one of your rooms, then in your fridge, etc.
Robinson automates the procedural construction of MineCraft environments. To do so, the bot uses local weather information that is monitored by a weather sensor located inside the cubic box, attached to a Raspberry Pi located within the box as well. This sensor is looking for changes in temperature, humidity, etc. that then serve to change the building blocks and rules of constructions inside MineCraft (put your cube inside your fridge and it will start to build icy blocks, put it in a wet environment and it will construct with grass, etc.)
A different kind of thermometer combined with a construction game.
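Robinson's sensing-to-building rule can be sketched as a simple mapping from sensor readings to block types. The thresholds and block names below are illustrative assumptions, not the project's real rules:

```python
# Sketch of Robinson's core rule: readings from the weather sensor inside
# the cube pick the MineCraft block type the bot builds with. Thresholds
# and block names are illustrative, not the project's actual values.

def block_for(temperature_c: float, humidity_pct: float) -> str:
    """Choose a building block from the sensed environment."""
    if temperature_c < 0:
        return "ice"    # cube in the fridge -> icy constructions
    if humidity_pct > 70:
        return "grass"  # wet environment -> grassy constructions
    return "stone"      # default block

if __name__ == "__main__":
    print(block_for(-5.0, 40.0))
```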
Note: Matthew Plummer-Fernandez also produced two (auto)MineCraft bots during the week of the workshop. The first one builds an environment according to fluctuations in different market indexes, while the second one tries to build “shapes” to escape this first environment. These two bots are downloadable from the GitHub repository that was created during the workshop.
#Good vs. #Evil – (Maxime Castelli)
Top: a transformed car racing game. Bottom: a race is going on between two Twitter hashtags, materialized by two cars.
#Good vs. #Evil is a quite straightforward project. It is also a hack of an existing game with two racing cars. In this case, the bot counts iterations of two hashtags on Twitter: #Good and #Evil. At each new iteration of one or the other word, the device gives an electric input to its associated car. The result is a slow and perpetual car race between “good” and “evil” through their online hashtag iterations.
A different kind of data visualization combined with racing cars.
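The counting logic behind the race can be sketched as follows. In the installation each step would be an electric pulse sent to a slot car; here it is just a counter, and the class and method names are illustrative, not the project's code:

```python
# Sketch of the #Good vs. #Evil logic: each new occurrence of a tracked
# hashtag advances its car one step. The hardware version would replace
# the counter increment with an electric pulse to the associated car.

class HashtagRace:
    def __init__(self, tags):
        self.positions = {tag: 0 for tag in tags}

    def on_tweet(self, text: str) -> None:
        """Advance the car of every tracked hashtag found in the tweet."""
        for tag in self.positions:
            if tag.lower() in text.lower():
                self.positions[tag] += 1  # hardware version: pulse the car

    def leader(self) -> str:
        """The hashtag whose car is currently ahead."""
        return max(self.positions, key=self.positions.get)

if __name__ == "__main__":
    race = HashtagRace(["#Good", "#Evil"])
    race.on_tweet("what a lovely day #good")
    race.on_tweet("traffic again... #evil")
    race.on_tweet("kittens #good")
    print(race.leader())
```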
The “Like” Hotline – (Mylène Dreyer, Caroline Buttet, Guillaume Cerdeira)
Top: Caroline Buttet and Mylène Dreyer are explaining their project. The laptop screen, showing a Facebook account, is beamed on the left outer part of the image. Bottom: Caroline Buttet is using a hacked phone to “like” pages.
The “Like” Hotline proposes to hack a regular phone and install a hotline bot on it. Connected to its online Facebook account, which follows a few personalities and the posts they are making, the bot asks questions to the caller, who can answer using the keypad on the phone. After navigating through a few choices, the hotline bot helps you “like” a post on the social network.
A different kind of hotline combined with a social network.
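The keypad-driven menu can be sketched as a small state machine. The posts and the "like" action below are placeholders; the real bot talks to its linked Facebook account, which is not reproduced here:

```python
# Sketch of the "Like" Hotline menu: the caller navigates recent posts
# with the phone keypad, then the bot "likes" the chosen one. Posts and
# the like action are placeholders, not the real social network API.

class LikeHotline:
    def __init__(self, posts):
        self.posts = posts
        self.liked = []

    def prompt(self) -> str:
        """What the hotline bot reads out to the caller."""
        return "; ".join(f"press {i + 1} for '{p}'"
                         for i, p in enumerate(self.posts))

    def press(self, digit: int) -> str:
        """Handle a keypad press by liking the corresponding post."""
        post = self.posts[digit - 1]
        self.liked.append(post)  # real bot: call the social network here
        return f"liked '{post}'"

if __name__ == "__main__":
    hotline = LikeHotline(["cat video", "concert announcement"])
    print(hotline.prompt())
    print(hotline.press(1))
```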
Simoncoin – (Romain Cazier)
Top: Romain Cazier introducing his “coin” project. Bottom: the device combines an old “Simon” memory game with the production of digital coins.
Simoncoin was unfortunately not finished by the end of the workshop week, but it was thought out in great detail; too much to explain in this short presentation. The main idea was to use the game logic of Simon to generate coins. In a parallel to Bitcoins, which are harder and harder to mine, Simon Coins also become more and more difficult to generate due to the game logic.
Another different kind of money combined with a memory game.
The Wifi Cracker – (Bastien Girshig, Martin Hertig)
Top: Bastien Girshig and Martin Hertig (left of Matthew Plummer-Fernandez) presenting. Middle and Bottom: the wifi password cracker slowly displays the letters of the wifi password.
The Wifi Cracker is an object that you can independently leave in a space. At first glance it looks a little bit like a clock, but it won’t display the time. Instead, it will look for available wifi networks in the area and start trying to find their protected passwords (Bastien and Martin found a ready-made process for that). The bot will test all possible combinations, which will take time. Once the device has found the working password, it will use its round display to transmit it. Letter by letter, and slowly as well.
A different kind of cuckoo clock combined with a password cracker.
Lots of thanks to Matthew Plummer-Fernandez for his involvement and great workshop direction; Lucien Langton for his involvement, technical digging into Raspberry Pis, pictures and documentation; Nicolas Nova and Charles Chalas (from HEAD), as well as Christophe Guignard, Christian Babski and Alain Bellet, for taking part or helping during the final presentation. A special thanks to the students from ECAL involved in the project and the energy they’ve put into it: Anne-Sophie Bazard, Benjamin Botros, Maxime Castelli, Romain Cazier, Guillaume Cerdeira, Mylène Dreyer, Bastien Girshig, Jonas Lacôte, Alexia Léchot, Nicolas Nahornyj, Pierre-Xavier Puissant.
From left to right: Bastien Girshig, Martin Hertig (The Wifi Cracker project), Nicolas Nova, Matthew Plummer-Fernandez (#Algopop), a “mystery girl”, Christian Babski (in the background), Patrick Keller, Sebastian Vargas, Pierre Xavier-Puissant (Robinson Miner), Alain Bellet and Lucien Langton (taking the pictures…) during the final presentation on Friday.
Friday, November 21. 2014
Note: a message from Matthew on Tuesday about his ongoing I&IC workshop. More resources to come there by the end of the week, as students are looking into many different directions!
Note: the workshop continues and should finish today. We'll document and publish results next week. As the workshop is all about small size and situated computing, Lucien Langton (assistant on the project) made a short tutorial about the way to set up your Pi. I'll also publish the Github repository that Matthew Plummer-Fernandez has set up.
The Bots are running! The second workshop of I&IC’s research study started yesterday with Matthew’s presentation to the students. A video of the presentation might be included in the post later on, but for now here’s the [pdf]: Botcaves
First prototypes set up by the students include bots playing Minecraft, bots cracking wifi passwords, and bots triggered by onboard IR cameras. So far, some groups worked directly with Python scripts deployed via SSH onto the Pis, others established a client-server connection between their Mac and their Pi by installing Processing on their Raspberry Pi, and finally some decided to start by hacking hardware, to connect to their bots later.
The research process will be continuously documented during the week.
Connecting to Pi via Proce55ing
Wednesday, November 19. 2014
| rblg note: following my previous post about the design research project we are leading with Nicolas Nova, a workshop is going on this week at the ECAL with our guest contributor Matthew Plummer-Fernandez (aka #Algopop). I'll reblog here during the coming days what's happening on our parallel blog (iiclouds.org).
Note: I publish here the brief that Matthew Plummer-Fernandez (a.k.a. Algopop) sent me before the workshop he’ll lead next week (17-21.11) with Media & Interaction Design students from 2nd and 3rd year Ba at the ECAL.
This workshop will take place in the frame of the I&IC research project, about which we had the occasion to exchange prior to the workshop. It will investigate the idea of very low power computing, situated processing, data sensing/storage and automated data treatment (“bots”) that could be highly distributed into everyday life objects or situations. While doing so, the project will undoubtedly address the idea of “networked objects”, which, due to the low capacities of their computing parts, will become major consumers of cloud based services (computing power, storage). Yet, following the hypothesis of the research: what kind of non-standard networked objects/situations, based on what kind of decentralized, personal cloud architecture?
The subject of this workshop explains some recent posts that could serve as resources or tools for this workshop, as the students will work around personal “bots” that will gather, process, host and expose data.
Stay tuned for more!
Botcaves (by Matthew Plummer-Fernandez)
Algorithmic and autonomous software agents known as bots are increasingly participating in everyday life. Bots can potentially gather data from both physical and digital activity, store and share data in the ‘cloud’, and develop ways to communicate and learn from their databases. In essence, bots can animate data, making it useful, interactive, visual or legible. Although software-based, bots require hardware to run from, and it is this underexplored crossover between the physical and digital presence of bots that this workshop investigates.
You will be asked to design a physical ‘housing’ or ‘interface’, either bespoke or hacked from existing objects, for your personal bots to run from. These botcaves would be present in the home, workspace or other, permitting novel interactions between the digital and physical environments that these bots inhabit.
Raspberry Pis, template bot code, APIs, cloud storage, existing services (Twitter, IFTTT, etc) and physical elements (sensors, lights, cameras, etc) may be used in the workshop.
British/Colombian artist and designer Matthew Plummer-Fernandez makes work that critically and playfully examines sociocultural entanglements with technologies. His current interests span algorithms, bots, automation, copyright, 3D files and file-sharing. He was awarded a Prix Ars Electronica Award of Distinction for the project Disarming Corruptor, an app for disguising 3D print files as glitched artefacts. He is also known for his computational approach to aesthetics translated into physical sculpture.
For research purposes he runs Algopop, a popular tumblr that documents the emergence of algorithms in everyday life as well as the artists that respond to this context in their work. This has become the starting point to a practice-based PhD funded by the AHRC at Goldsmiths, University of London, where he is also a research associate at the Interaction Research Studio and a visiting tutor. He holds a BEng in Computer Aided Mechanical Engineering from Kings College London and an MA in Design Products from the Royal College of Art.
Friday, October 17. 2014
Note: are we all on our way, not to LA, but to HER... ?
Tablets and laptops coming later this year will be able to constantly listen for voice commands thanks to new chips from Intel.
By Tom Simonite
New processors: A silicon wafer etched with Intel’s Core M mobile chips.
A new line of mobile chips unveiled by Intel today makes it possible to wake up a laptop or tablet simply by saying “Hello, computer.” Once it has been awoken, the computer can operate as a voice-controlled virtual assistant. You might call out “Hello, computer, what is the weather forecast today?” while getting out of bed.
Tablets and lightweight laptops based on the new Core M line of chips will go on sale at the end of this year. They can constantly listen for voice instructions thanks to a component known as a digital signal processor core that’s dedicated to processing audio with high efficiency and minimal power use.
“It doesn’t matter what state the system will be in, it will be listening all the time,” says Ed Gamsaragan, an engineer at Intel. “You could be actively doing work or it could be in standby.”
It is possible to set any two- or three-word phrase to rouse a computer with a Core M chip. A device can also be trained to respond only to a specific voice. The voice-print feature isn’t accurate enough to replace a password, but it could prevent a device from being accidentally woken up, says Gamsaragan. If coupled with another biometric measure, such as a webcam with facial recognition, however, a voice command could work as a security mechanism, he says.
Manufacturers will decide how to implement the voice features in Intel’s Core M chips in devices that will appear on shelves later this year.
The wake-on-voice feature is compatible with any operating system. That means it could be possible to summon Microsoft’s virtual assistant Cortana in Windows, or Google’s voice search functions in Chromebook devices.
The only mobile device on the market today that can constantly listen for commands is the Moto X smartphone from Motorola (see “The Era of Ubiquitous Listening Dawns”). It has a dedicated audio chip that constantly listens for the command “OK, Google,” which activates the Google search app.
Intel’s Core M chips are based on the company’s new generation of smaller transistors, with features as small as 14 nanometers. This new architecture makes chips more power efficient and cooler than earlier generations, so Core M devices don’t require cooling fans.
Intel says that the 14-nanometer architecture will make it possible to make laptops and tablets much thinner than they are today. This summer the company showed off a prototype laptop that is only 7.2 millimeters (0.28 inches) thick. That’s slightly thinner than Apple’s iPad Air, which is 7.5 millimeters thick, but Intel’s prototype packed considerably more computing power.
Wednesday, October 15. 2014
Note: after the zoning for drones within cities, will we equip cities with specific "city marks" dedicated to driverless cars? It reminds me a bit of this design research project done a few years ago, The New Robot Domesticity, whose purpose was to design objects so that robots could also recognize/use them. Further back, it also reminds me of a workshop we organized at the ECAL in 2005 with researcher Frederic Kaplan (now head of Digital Humanities at EPFL), whose purpose was to design artefacts for the Sony Aibo (a doc. video here). This latter project was realized in the frame of the research project Variable Environment.
Tricky intersections and rogue mechanical pedestrians will provide a testing area for automated and connected cars.
By Will Knight
The site of Ann Arbor’s driverless town, currently under construction.
A mocked-up set of busy streets in Ann Arbor, Michigan, will provide the sternest test yet for self-driving cars. Complex intersections, confusing lane markings, and busy construction crews will be used to gauge the aptitude of the latest automotive sensors and driving algorithms; mechanical pedestrians will even leap into the road from between parked cars so researchers can see if they trip up onboard safety systems.
The urban setting will be used to create situations that automated driving systems have struggled with, such as subtle driver-pedestrian interactions, unusual road surfaces, tunnels, and tree canopies, which can confuse sensors and obscure GPS signals.
“If you go out on the public streets you come up against rare events that are very challenging for sensors,” says Peter Sweatman, director of the University of Michigan’s Mobility Transformation Center, which is overseeing the project. “Having identified challenging scenarios, we need to re-create them in a highly repeatable way. We don’t want to be just driving around the public roads.”
Google and others have been driving automated cars around public roads for several years, albeit with a human ready to take the wheel if necessary. Most automated vehicles use accurate digital maps and satellite positioning, together with a suite of different sensors, to navigate safely.
Highway driving, which is less complex than city driving, has proved easy enough for self-driving cars, but busy downtown streets—where cars and pedestrians jockey for space and behave in confusing and surprising ways—are more problematic.
“I think it’s a great idea,” says John Leonard, a professor at MIT who led the development of a self-driving vehicle for a challenge run by DARPA in 2007. “It is important for us to try to collect statistically meaningful data about the performance of self-driving cars. Repeated operations—even in a small-scale environment—can yield valuable data sets for testing and evaluating new algorithms.”
The simulation is being built on the edge of the University of Michigan’s campus with funding from the Michigan Department of Transportation and 13 companies involved with developing automated driving technology. It is scheduled to open next spring. It will consist of four miles of roads with 13 different intersections.
Even Google, which has an ambitious vision of vehicle automation, acknowledges that urban driving is a significant challenge. Speaking at an event in California this July, Chris Urmson, who leads the company’s self-driving car project, said several common urban situations remain thorny (see “Urban Jungle a Tough Challenge for Google’s Autonomous Car”). Speaking with MIT Technology Review last month, Urmson gave further details about as-yet-unsolved scenarios (see “Hidden Obstacles for Google’s Self-Driving Cars”).
Such challenges notwithstanding, the first automated cars will go into production shortly. General Motors announced last month that a 2017 Cadillac will be the first car to offer entirely automated driving on highways. It’s not yet clear how the system will work—for example, how it will ensure that the driver isn’t too distracted to take the wheel in an emergency, or under what road conditions it might refuse to take the wheel—but in some situations, the car’s Super Cruise system will take care of steering, braking, and accelerating.
Another technology to be tested in the simulated town is vehicle-to-vehicle communications. The University of Michigan recently concluded a government-funded study in Ann Arbor involving thousands of vehicles equipped with transmitters that broadcast position, direction of travel, speed, and other information to other vehicles and to city infrastructure. The trial showed that vehicle-to-vehicle and vehicle-to-infrastructure communications could prevent many common accidents by providing advanced warning of a possible collision.
“One of the interesting things, from our point of view, is what extra value you get by combining” automation and car-to-car communications, Sweatman says. “What happens when you put the two together—how much faster can you deploy it?”
Monday, April 28. 2014
Thursday, April 17. 2014
Via Computed·Blg via PCWorld
When Jules Verne wrote Around the World in Eighty Days, this probably isn’t what he had in mind: Google’s Project Loon announced last week that one of its balloons had circumnavigated the Earth in 22 days.
Granted, we’re not talking a grand tour of the world here: The balloon flew in a loop over the open ocean surrounding Antarctica, starting at New Zealand. According to the Project Loon team, it was the latest accomplishment for the balloon fleet, which just achieved 500,000 kilometers of flight.
While it may seem like fun and games, Project Loon’s larger goal is to use high-altitude balloons to “connect people in rural and remote areas, help fill coverage gaps, and bring people back online after disasters.”
Currently, the project is test-flying balloons to learn more about wind patterns, and to test its balloon designs. In the past nine months, the project team has used data it’s accumulated during test flights to “refine our prediction models and are now able to forecast balloon trajectories twice as far in advance.”
It also modified the balloon’s air pump (which pumps air in and out of the balloon) to operate more efficiently, which in turn helped the balloon stay on course in this latest test run.
Project Loon’s next step toward universal Internet connection is to create “a ring of uninterrupted connectivity around the 40th southern parallel,” which it expects to pull off sometime this year.
Thursday, April 03. 2014
fabric | rblg
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, researches, writings, exhibitions and projects that we notice and find interesting during our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as archive, references and resources. It is shared with all those interested in the same topics as we are, in the hope that they will also find valuable references and content in it.