The third workshop we ran in the frame of I&IC with our guest researcher Matthew Plummer-Fernandez (Goldsmiths University) and the 2nd & 3rd year students (BA) in Media & Interaction Design (ECAL) ended last Friday (| rblg note: on the 21st of Nov.) with interesting results. The workshop focused on small situated computing technologies that can collect, aggregate and/or "manipulate" data in automated ways (bots), and that will certainly need to rely heavily on cloud technologies because of their low storage and computing capacities. In other words, "networked data objects" that will soon become very common, thanks to cheap new small computing devices (i.e. Raspberry Pis for DIY applications) or sensors (i.e. Arduino, etc.). The title of the workshop was "Botcave"; its objective was explained by Matthew in a previous post.
The choice of this context of work was defined according to our overall research objective, even though we knew that it wouldn't address the "cloud computing" apparatus directly (something we had learned to be a difficult approach during the second workshop), but that it would nonetheless question its interfaces and the way we experience the whole service, especially the evolution of this apparatus through new types of everyday interactions and data generation.
Matthew Plummer-Fernandez (#Algopop) during the final presentation at the end of the research workshop.
Through this workshop, Matthew and the students definitely raised the following points and questions:
1° Small situated technologies that will soon spread everywhere will become heavy users of cloud-based computing and data storage, as they have low storage and computing capacities of their own. While some might just use and manipulate existing data (like some of the workshop projects, i.e. #Good vs. #Evil or Moody Printer), collectively they will above all contribute to producing very large additional quantities of data (i.e. Robinson Miner). Yet the amount of meaningful data to be "pushed" and "processed" in the cloud remains a big question mark: as there will be (too) huge amounts of such data (Lucien will probably post something later about this subject: "fog computing"), this might end up requiring interdisciplinary teams to rethink cloud architectures.
2° Stored data become "alive" or significant only when "manipulated". This can of course be done by "analog users", but in general it is now rather operated by rules and algorithms of different sorts (in the frame of this workshop: automated bots). Are these rules "situated" as well, and possibly context-aware (context-intelligent), i.e. Robinson Miner? Or are they somehow more abstract and located anywhere in the cloud? Both?
3° These "networked data objects" (and soon "networked data everything") will contribute to "babelize" users' interactions and interfaces in all directions, paving the way for new types of combinations and experiences (creolization processes); i.e. The Beast, The Like Hotline, Simon Coins or The Wifi Cracker could be considered as starting phases of such processes. Cloud interfaces and computing will then become everyday "things" and, at home, new domestic objects with which we'll have totally different interactions (this last point must still be discussed though, as domesticity might not exist anymore according to Space Caviar).
Moody Printer – (Alexia Léchot, Benjamin Botros)
Moody Printer remains a basic conceptual proposal at this stage: a hacked printer, connected to a Raspberry Pi that stays hidden (it would be located inside the printer), has access to weather information. Similarly to human beings, its "mood" can be affected by such inputs following some basic rules (good - bad, hot - cold, sunny - cloudy - rainy, etc.). The automated process then searches Google Images according to its defined "mood" (a direct link between "mood", weather conditions and an exhaustive list of words) and autonomously starts to print the results.
A different kind of printer combined with weather monitoring.
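As a sketch of how such a bot could work, here is a minimal Python reconstruction; it is not the students' code. The wttr.in weather endpoint, the loremflickr keyword-image service (standing in for the Google image search) and printing via the CUPS `lp` command are all our assumptions:

```python
import random
import subprocess

import requests

# Hypothetical mapping from weather conditions to "moods" and search words.
MOOD_WORDS = {
    "sunny": ["joy", "picnic", "smile"],
    "cloudy": ["melancholy", "fog", "grey"],
    "rainy": ["sadness", "tears", "puddle"],
}

def current_mood():
    """Derive a 'mood' from the current weather (wttr.in is an assumed free source)."""
    r = requests.get("https://wttr.in/Lausanne?format=j1", timeout=10)
    desc = r.json()["current_condition"][0]["weatherDesc"][0]["value"].lower()
    if "rain" in desc:
        return "rainy"
    if "cloud" in desc or "overcast" in desc:
        return "cloudy"
    return "sunny"

def fetch_image(word):
    """Stand-in for the image search: loremflickr returns a random photo by keyword."""
    r = requests.get(f"https://loremflickr.com/320/240/{word}", timeout=10)
    path = "/tmp/mood.jpg"
    with open(path, "wb") as f:
        f.write(r.content)
    return path

word = random.choice(MOOD_WORDS[current_mood()])
subprocess.run(["lp", fetch_image(word)])  # print via CUPS on the Pi
```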
The Beast – (Nicolas Nahornyj)
Top: Nicolas Nahornyj presenting his project to the audience. Bottom: the laptop and "the beast".
The Beast is a device that asks to be fed with money at random times... It is your new laptop companion. To calm it down for a while, you must insert a coin into the slot provided for that purpose. If you don't comply, not only will it continue to ask for money on a more frequent basis, but it will also randomly pick an image that lies around on your hard drive, post it to a popular social network (i.e. Facebook, Pinterest, etc.) and then erase it from your local disk. Slowly, The Beast will remove all images from your hard drive and post them online...
A different kind of slot machine combined with the stealing of private files.
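The Beast's behavior boils down to a short loop. Here is a hedged Python sketch of it; the coin-slot sensor and the social-network posting are stubs, since the real versions require hardware and API credentials we don't have:

```python
import glob
import os
import random
import time

IMAGE_DIR = os.path.expanduser("~/Pictures")   # where The Beast hunts for images
BASE_DELAY = 3600                              # seconds between demands, shrinks over time

def post_to_social_network(path):
    """Stub: the actual posting (Facebook, Pinterest, ...) needs API credentials."""
    print(f"posting {path} ...")

def coin_inserted(timeout):
    """Stub for the coin-slot sensor; here we just wait and report failure."""
    time.sleep(timeout)
    return False

delay = BASE_DELAY
while True:
    time.sleep(random.uniform(0, delay))       # demand money at a random moment
    print("FEED ME.")
    if not coin_inserted(timeout=60):
        images = glob.glob(os.path.join(IMAGE_DIR, "*.jpg"))
        if images:
            victim = random.choice(images)
            post_to_social_network(victim)
            os.remove(victim)                  # the image leaves the local disk for good
        delay = max(60, delay * 0.8)           # demands become more frequent
```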
Robinson – (Anne-Sophie Bazard, Jonas Lacôte, Pierre-Xavier Puissant)
Top: Pierre-Xavier Puissant is looking at the autonomous "minecrafting" of his bot. Bottom: the proposed bot container, which takes up the idea of cubic construction. It could be placed in your garden, in one of your rooms, then in your fridge, etc.
Robinson automates the procedural construction of Minecraft environments. To do so, the bot uses local weather information monitored by a weather sensor located inside the cubic box, attached to a Raspberry Pi located within the box as well. The sensor watches for changes in temperature, humidity, etc., which then change the building blocks and construction rules inside Minecraft (put your cube inside your fridge and it will start to build icy blocks; put it in a wet environment and it will construct with grass, etc.).
A different kind of thermometer combined with a construction game.
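For illustration, the climate-to-blocks mapping could look like this on a Pi, using the Minecraft: Pi Edition API (mcpi) and the common Adafruit_DHT sensor library. The wiring, the thresholds and the deliberately trivial one-line building "procedure" are our own assumptions, not the project's actual rules:

```python
import time

import Adafruit_DHT                    # temperature/humidity sensor library (assumed DHT22)
from mcpi.minecraft import Minecraft   # Minecraft: Pi Edition API

SENSOR, PIN = Adafruit_DHT.DHT22, 4    # assumed wiring: DHT22 data line on GPIO 4
ICE, GRASS, SAND = 79, 2, 12           # Minecraft: Pi Edition block ids

def block_for(temperature, humidity):
    """Map the monitored climate to a building material (thresholds are guesses)."""
    if temperature < 5:
        return ICE                      # in the fridge: icy blocks
    if humidity > 70:
        return GRASS                    # wet environment: grass
    return SAND

mc = Minecraft.create()                 # connect to a running Minecraft: Pi Edition
pos = mc.player.getTilePos()
x, y, z = pos.x, pos.y, pos.z
while True:
    humidity, temperature = Adafruit_DHT.read_retry(SENSOR, PIN)
    if humidity is None:                # read_retry can still fail; skip this round
        continue
    mc.setBlock(x, y, z, block_for(temperature, humidity))
    x += 1                              # trivially simple "procedural" rule: build a line
    time.sleep(5)
```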
Note: Matthew Plummer-Fernandez also produced two (auto)Minecraft bots during the workshop week. The first one builds an environment according to fluctuations in different market indexes, while the second one tries to build "shapes" to escape this first environment. Both bots are downloadable from the Github repository that was set up during the workshop.
#Good vs. #Evil – (Maxime Castelli)
Top: a transformed car racing game. Bottom: a race is going on between two Twitter hashtags, materialized by two cars.
#Good vs. #Evil is a quite straightforward project. It is a hack of an existing game with two racing cars. In this case, the bot counts occurrences of two hashtags on Twitter: #Good and #Evil. At each new occurrence of one or the other word, the device gives an electric input to the associated car. The result is a slow and perpetual car race between "good" and "evil", driven by the iterations of their online hashtags.
A different kind of data visualization combined with racing cars.
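In Python, the counting-and-pulsing loop could be sketched as below. The GPIO pins are an assumed wiring, and the Twitter counting is stubbed out (a real version would use a client such as tweepy with API credentials):

```python
import time

import RPi.GPIO as GPIO

GOOD_PIN, EVIL_PIN = 17, 18             # assumed wiring of the two cars' throttle contacts

GPIO.setmode(GPIO.BCM)
GPIO.setup([GOOD_PIN, EVIL_PIN], GPIO.OUT)

def count_mentions(hashtag):
    """Stub: a real version would query Twitter (e.g. via tweepy) for the running count."""
    return 0

def pulse(pin, duration=0.2):
    """Give the corresponding car one electric nudge."""
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(duration)
    GPIO.output(pin, GPIO.LOW)

last_good = last_evil = 0
while True:
    good, evil = count_mentions("#Good"), count_mentions("#Evil")
    for _ in range(good - last_good):   # one pulse per new mention
        pulse(GOOD_PIN)
    for _ in range(evil - last_evil):
        pulse(EVIL_PIN)
    last_good, last_evil = good, evil
    time.sleep(10)
```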
The “Like” Hotline – (Mylène Dreyer, Caroline Buttet, Guillaume Cerdeira)
Top: Caroline Buttet and Mylène Dreyer explaining their project. The laptop screen, showing a Facebook account, is projected on the left-hand side of the image. Bottom: Caroline Buttet uses a hacked phone to "like" pages.
The "Like" Hotline proposes to hack a regular phone and install a hotline bot on it. Connected to its own Facebook account, which follows a few personalities and the posts they make, the bot asks the caller questions that can be answered using the phone's keypad. After navigating through a few choices, the hotline bot helps you "like" a post on the social network.
A different kind of hotline combined with a social network.
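The call flow reduces to a small menu loop. The sketch below is our own guess at its structure, with the phone's audio, the keypad and the Facebook "like" call all left as stubs:

```python
def say(text):
    """Stub: play text through the hacked phone's handset (e.g. via text-to-speech)."""
    print(text)

def read_key():
    """Stub: return the next keypad press from the phone hardware."""
    return input("key> ")

def like(post_id):
    """Stub: the actual 'like' would go through the Facebook API with credentials."""
    print(f"liked post {post_id}")

def hotline(posts):
    """Walk the caller through recent posts; 1 likes, 2 skips, 0 hangs up."""
    say("Welcome to the Like Hotline.")
    for post_id, author, text in posts:
        say(f"{author} posted: {text}. Press 1 to like, 2 to skip, 0 to hang up.")
        key = read_key()
        if key == "1":
            like(post_id)
        elif key == "0":
            break
    say("Goodbye.")

hotline([(42, "Some Personality", "My new album is out")])
```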
Simoncoin – (Romain Cazier)
Top: Romain Cazier introducing his "coin" project. Bottom: the device combines an old "Simon" memory game with the production of digital coins.
Simoncoin was unfortunately not finished by the end of the workshop week, but it was thought out in great detail that would take too long to explain in this short presentation. The main idea was to use the game logic of Simon to generate coins. In a parallel to Bitcoins, which are harder and harder to mine, Simon Coins are also more and more difficult to generate due to the game logic.
Another different kind of money combined with a memory game.
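One plausible reading of that game logic in Python (our sketch, with the physical lights and buttons stubbed out): each successful replay mints a coin, and each coin lengthens the next sequence, so "mining" gets harder exactly as the game does.

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def flash(color):
    """Stub: light up the corresponding Simon pad."""
    print("flash:", color)

def read_button():
    """Stub: read the player's next button press."""
    return input("color> ")

def mint_simoncoin(difficulty):
    """One mining attempt: replay a random sequence of `difficulty` colors."""
    sequence = [random.choice(COLORS) for _ in range(difficulty)]
    for color in sequence:
        flash(color)
    for color in sequence:
        if read_button() != color:
            return None                  # failed attempt: no coin
    return hash(tuple(sequence))         # a toy stand-in for a minted coin

difficulty = 3
while True:
    coin = mint_simoncoin(difficulty)
    if coin is not None:
        print("minted coin", coin)
        difficulty += 1                  # every coin makes the next one harder
```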
The Wifi Cracker – (Bastien Girshig, Martin Hertig)
Top: Bastien Girshig and Martin Hertig (left of Matthew Plummer-Fernandez) presenting. Middle and bottom: the wifi password cracker slowly displays the letters of the wifi password.
The Wifi Cracker is an object that you can leave on its own in a space. At first glance it looks a little bit like a clock, but it won't display the time. Instead, it looks for available wifi networks in the area and starts trying to crack their protected passwords (Bastien and Martin found a ready-made process for that). The bot tests all possible combinations, which takes time. Once the device has found the working password, it uses its round display to transmit it: letter by letter, and just as slowly.
A different kind of cuckoo clock combined with a password cracker.
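We won't reproduce the cracking process here, but the slow letter-by-letter reveal is easy to sketch in Python (the display driver is a stub, and the one-letter-per-minute pacing is our assumption):

```python
import time

def show_on_display(text):
    """Stub: draw `text` on the device's round display."""
    print(text)

def reveal(password, seconds_per_letter=60):
    """Disclose the recovered password one letter at a time, clock-slow."""
    for i in range(1, len(password) + 1):
        show_on_display(password[:i])
        time.sleep(seconds_per_letter)

reveal("hunter2", seconds_per_letter=1)  # sped up for testing
```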
Acknowledgments:
Lots of thanks to Matthew Plummer-Fernandez for his involvement and great workshop direction; to Lucien Langton for his involvement, technical digging into Raspberry Pis, pictures and documentation; to Nicolas Nova and Charles Chalas (from HEAD), as well as Christophe Guignard, Christian Babski and Alain Bellet, for taking part in or helping during the final presentation. A special thanks to the students from ECAL involved in the project and the energy they've put into it: Anne-Sophie Bazard, Benjamin Botros, Maxime Castelli, Romain Cazier, Guillaume Cerdeira, Mylène Dreyer, Bastien Girshig, Jonas Lacôte, Alexia Léchot, Nicolas Nahornyj, Pierre-Xavier Puissant.
From left to right: Bastien Girshig, Martin Hertig (The Wifi Cracker project), Nicolas Nova, Matthew Plummer-Fernandez (#Algopop), a "mystery girl", Christian Babski (in the background), Patrick Keller, Sebastian Vargas, Pierre-Xavier Puissant (Robinson Miner), Alain Bellet and Lucien Langton (taking the pictures...) during the final presentation on Friday.
Note: the workshop continues and should finish today. We'll document and publish results next week. As the workshop is all about small size and situated computing, Lucien Langton (assistant on the project) made a short tutorial about the way to set up your Pi. I'll also publish the Github repository that Matthew Plummer-Fernandez has set up.
The Bots are running! The second workshop of I&IC’s research study started yesterday with Matthew’s presentation to the students. A video of the presentation might be included in the post later on, but for now here’s the [pdf]: Botcaves
First prototypes set up by the students include bots playing Minecraft, bots cracking wifis, and bots triggered by onboard IR cameras. So far, some groups worked directly with Python scripts deployed via SSH onto the Pis, others established a client-server connection between their Mac and their Pi by installing Processing on their Raspberry Pi, and some decided to start by hacking hardware, to connect it to their bots later.
The research process will be continuously documented during the week.
Note: a few of our recent works and exhibitions are included in this promising young publication related to architectural thinking, Desierto, edited by Paper - Architectural Histamine in Madrid. At the editorial team's invitation, I had the occasion to write a paper about Deterritorialized Living and one of its physical installations last year in Pau (France), during Pau Acces(s). We also took the occasion of the publication to give a glimpse of a related research project called Algorithmic Atomized Functioning.
"The temperature of the invisible and the desacralization of the air.
28° Celsius is the temperature at which protection becomes superfluous. It is also the temperature at which swimming pools are acclimatised. Within the limits of this hygrothermal comfort zone, we do not require the intervention of our body's thermoregulatory mechanisms nor that of any external artificial thermal controls in order to feel pleasantly comfortable while carrying out a sedentary activity without clothing. 28° Celsius is thus the temperature at which clothing can disappear, just as architecture could."
Authors are Gabriel Ruiz-Larrea, Sean Lally, Philippe Rahm, Nerea Calvillo, myself, Helen Mallinson, Antonio Cobo, José Vella Castillo and Pauly Garcia-Masedo.
Editorial by Gabriel Ruiz-Larrea (editor in chief). Editorial team composed of Natalia David, Nuria Úrculo, María Buey and Daniel Lacasta Fitzsimmons.
Inhabiting Deterritorialization, by Patrick Keller, with images of Deterritorialized Living website, Deterritorialized Daylight installation (Pau, France) and Algorithmic Atomized Functioning.
Last week Google and Novartis announced that they’re teaming up to develop contact lenses that monitor glucose levels and automatically adjust their focus. But these could be just the start of a clever new product category. From cancer detection and drug delivery to reality augmentation and night vision, our eyes offer unique opportunities for both health monitoring and enhancement.
“Now is the time to put a little computer and a lot of miniaturized technologies in the contact lens,” says Franck Leveiller, head of research and development in the Novartis eye care division.
One of the Novartis-Google prototype lenses contains a device about the size of a speck of glitter that measures glucose in tears. A wireless antenna then transmits the measurements to an external device. It’s designed to ease the burden of diabetics who otherwise have to prick their fingers to test their blood sugar levels.
“I have many patients that are managing diabetes, and they described it as having a part-time job. It’s so arduous to monitor,” says Thomas Quinn, who is head of the American Optometric Association’s contact lens and cornea section. “To have a way that patients can do that more easily and get some of their life back is really exciting.”
Glucose isn’t the only thing that can be measured from tears rather than a blood sample, says Quinn. Tears also contain a chemical called lacryglobin that serves as a biomarker for breast, colon, lung, prostate, and ovarian cancers. Monitoring lacryglobin levels could be particularly useful for cancer patients who are in remission, Quinn says.
Quinn also believes that drug delivery may be another use for future contact lenses. If a lens could dispense medication slowly over long periods of time, it would be better for patients than the short, concentrated doses provided by eye drops, he says. Such a lens is not easy to make, though (see “A Drug-Dispensing Lens”).
The autofocusing lens is in an earlier stage of development, but the goal is for it to adjust its shape depending on where the eye is looking, which would be especially helpful for people who need reading glasses. A current prototype of the lens uses photodiodes to detect light hitting the eye and determine whether the eye is directed downward. Leveiller says the team is also looking at other possible techniques.
Google and Novartis are far from the only ones interested in upgrading the contact lens with such new capabilities. In Switzerland, a company called Sensimed is working on a contact lens that measures the intraocular pressure that results from the liquid buildup in the eyes of glaucoma patients (see "Glaucoma Test in a Contact Lens"). And researchers at the University of Michigan are using graphene to make infrared-sensitive contact lenses; the vision, as it were, is that these might one day provide some form of night vision without the bulky headgear.
A Seattle-based company, Innovega, meanwhile, has developed a contact lens with a small area that filters specific bands of red, green, and blue light, giving users the ability to focus on a very small, high-resolution display less than an inch away from their eyes without interfering with normal vision. That makes tiny displays attached to glasses look more like IMAX movie screens, says the company's CEO, Steve Willey. Together, the lens and display are called iOptik.
Plenty of challenges still remain before we’re all walking around with glucose-monitoring, cancer-detecting, drug-delivering super night vision. Some prototypes out there are unusually thick, Quinn says, and some use traditional, rigid electronics where clear, flexible alternatives would be preferable. And, of course, all will have to pass regulatory approval to show they are safe and effective.
Jeff George, the head of the Novartis eye care division, is certainly optimistic about Google’s smart lens. “Google X’s team refers to themselves as a ‘moon shot factory.’ I’d view this as better than a moon shot given what we’ve seen,” he says.
What if the compass app in your phone didn’t just visually point north but actually seemed to pull your hand in that direction?
Two Japanese researchers will present tiny handheld devices that generate this kind of illusion at next month’s annual SIGGRAPH technology conference in Vancouver, British Columbia. The “force display” devices, called Traxion and Buru-Navi3, exploit the fact that a vibrating object is perceived as either pulling or pushing when held. The effect could be applied in navigation and gaming applications, and it suggests possibilities in mobile and wearable technology as well.
Tomohiro Amemiya, a cognitive scientist at NTT Communication Science Laboratories, began the Buru-Navi project in 2004, originally as a way to research how the brain handles sensory illusions. His initial prototype was roughly the size of a paperback novel and contained a crankshaft mechanism to generate vibration, similar to the motion of a locomotive wheel. Amemiya discovered that when the vibrations occurred asymmetrically at a frequency of 10 hertz—with the crankshaft accelerating sharply in one direction and then easing back more slowly—a distinctive pulling sensation emerged in the direction of the acceleration.
With his collaborator Hiroaki Gomi, Amemiya continued to modify and miniaturize the device into its current form, which is about the size of a wine cork and relies on a 40-hertz electromagnetic actuator similar to those found in smartphones. When pinched between the thumb and forefinger, Buru-Navi3 creates a continuous force illusion in one direction (toward or away from the user, depending on the device’s orientation).
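The asymmetry principle is easy to illustrate numerically. The Python (numpy) toy below builds a 40 Hz waveform that rises sharply and falls back slowly; it is only our illustration of the published idea, not the actual Buru-Navi3 drive signal:

```python
import numpy as np

FREQ = 40            # actuator frequency in hertz
RATE = 8000          # samples per second
ASYM = 0.2           # fraction of each cycle spent on the sharp stroke

t = np.arange(RATE) / RATE
phase = (t * FREQ) % 1.0
signal = np.where(
    phase < ASYM,
    phase / ASYM,                            # fast rise: sharp acceleration one way
    1.0 - (phase - ASYM) / (1.0 - ASYM),     # slow fall: gentle return the other way
)

velocity = np.diff(signal) * RATE            # steep rise vs. shallow fall
print(velocity.max(), velocity.min())        # ~200 vs. ~-50: a 4:1 asymmetry
```

Per Amemiya's description above, the sharp strokes dominate perception while the slow returns do not, which is what produces the net perceived pull toward the direction of the fast acceleration.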
The second device, called Traxion, was developed within the last year at the University of Tokyo by a team led by computer science researcher Jun Rekimoto. Traxion also generates a force illusion via an asymmetrically vibrating actuator held between the fingers. “We tested many users, and they said that it feels as if there’s some invisible string pulling or pushing the device,” Rekimoto says. “It’s a strong sensation of force.”
Both devices create a pulling force significant enough to guide a blindfolded user along a path or around corners. This way-finding application might be a perfect fit for the smart watches that Samsung, Google, and perhaps Apple are mobilizing to sell.
Haptics, which is the name for the technology behind tactile interfaces, has been explored for years in limited or niche applications. But Vincent Hayward, who researches haptics at the Pierre and Marie Curie University in Paris, says the technology is now “reaching a critical mass.” He adds, “Enough people are trying a sufficient number of ideas that the balance between novelty and utility starts shifting.”
Nonetheless, harnessing these kinesthetic effects for mainstream use is easier said than done. Amemiya admits that while his device generates strong force illusions while being pinched between a finger and thumb, the effect becomes much weaker if the device is merely placed in contact with the skin (as it would be in a watch).
The rise of even crude haptic wearable devices could accelerate this kind of scientific research, though. “A wearable system is always on, so it records data constantly,” Amemiya explains. “This can be very useful for understanding human perception.”
Seen everywhere online these days, and now on | rblg too... Yet another "trojan horse" by Google to turn you into a mobile and indoor sensor for their own sake (data collection, that is). Soon we will be able to visit your flat, or those of your friends, through Google Maps/Earth or through a constellation of other applications. After clicking at the door, of course.
But, as is often the case with such devices, it is an interesting tool as well... On top of it, disruptive apps will be built that will further mix material and immaterial experiences and further locate parts of your "home" in the "clouds".
As it consists of an open call for ideas before they give away 200 dev kits, don't hesitate to send them a line if you have an unpredictable one (this promises to be very competitive...)!
“Our current prototype is a 5” phone containing customized hardware and software designed to track the full 3D motion of the device, while simultaneously creating a map of the environment. These sensors allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real-time, combining that data into a single 3D model of the space around you.
“It runs Android and includes development APIs to provide position, orientation, and depth data to standard Android applications written in Java, C/C++, as well as the Unity Game Engine. These early prototypes, algorithms, and APIs are still in active development. So, these experimental devices are intended only for the adventurous and are not a final shipping product….”
“Connected devices are central to our long-term strategy of injecting sophisticated computation and knowledge into everything. With the Wolfram Language we now have a way to describe and compute about things in the world. Connected devices are what we need to measure and interface with those things.
“In the end, we want every type of connected device to be seamlessly integrated with the Wolfram Language. And this will have all sorts of important consequences. But as we work toward this, there’s an obvious first step: we have to know what types of connected devices there actually are.
“So to have a way to answer that question, today we’re launching the Wolfram Connected Devices Project—whose goal is to work with device manufacturers and the technical community to provide a definitive, curated, source of systematic knowledge about connected devices….”
[The Audubon Society's micro-dredger, the John James, making new land in the Paul J. Rainey Wildlife Sanctuary in South Louisiana. Karen Westphal, Audubon’s Atchafalaya Basin program manager, will be speaking about this participatory micro-dredging project at DredgeFest Louisiana's symposium, which is this Saturday and Sunday at Loyola University in New Orleans.]
…south Louisiana is disappearing—terrifyingly fast. Sea-level rise, salt water intrusion, and canal excavation for industrial purposes have all combined with the constrainment of the river via flood control infrastructures to radically alter the balance between deposition, subsidence, and erosion. Instead of growing, the delta is now shrinking. Louisiana has lost over 1700 square miles of land (an area greater than the state of Rhode Island) since 1930. Without a change in course, it is anticipated to double that loss in the next fifty years. By 2100, subsidence, erosion, and sea level rise are projected to combine to leave New Orleans little more than an island fortress, effectively isolated in the rising Gulf of Mexico.
Moreover, even where the land itself may not be entirely submerged, the loss of barrier islands and coastal marshes exposes human settlements ever more precariously to the vicious effects of hurricanes and tropical storms, including the destructive waves known as storm surge.
This situation is entirely untenable. You thought Katrina was a terrible disaster? (It was.) Imagine what happens to New Orleans when a Category 6 hurricane hits in 2086, when even the highest ground in the French Quarter and the Garden District is barely above sea level and well below the ever-thickening barriers the Army Corps will throw up to protect America’s newest island.
In response to this apocalyptic but plausible threat, Louisiana is engaging in the world's first large-scale experiment in restoration sedimentology. With the aid of components of the federal government like the Army Corps of Engineers and a bounty of funds earmarked for coastal restoration and protection as a result of payments owed by BP for the damages wrought by the 2010 Deepwater Horizon oil disaster, Louisiana has accelerated its nascent crash-program in experimental land-making machines, rapidly prototyping a wide array of weird and wonderful techno-infrastructural strategies for building land. This is an effort to cobble together a synthetic analog to the land-making machine that the Mississippi once was. If you want to understand the future of coastlines and deltas in a world of rising seas and surging storms, you should pay close attention to what is happening in Louisiana.
Researchers at the MIT Media Lab and the Max Planck Institutes have created a foldable, cuttable multi-touch sensor that works no matter how you cut it, allowing multi-touch input on nearly any surface.
In traditional sensors the connectors are laid out in a grid, and when one part of the grid is damaged you lose sensitivity in a wide swathe of other sensors. This system lays the sensors out like a star, which means that cutting one part of the sensor only affects other parts down the line. For example, you can cut the corners off a square and still get the sensor to work, or even cut all the way down to the main, central connector array and, as long as there are still sensors on the surface, it will pick up input.
The team that created it, Simon Olberding, Nan-Wei Gong, John Tiab, Joseph A. Paradiso, and Jürgen Steimle, write:
This very direct manipulation allows the end-user to easily make real-world objects and surfaces touch interactive, to augment physical prototypes and to enhance paper craft. We contribute a set of technical principles for the design of printable circuitry that makes the sensor more robust against cuts, damages and removed areas. This includes novel physical topologies and printed forward error correction.
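The robustness argument can be made concrete with a toy model in Python (our own simplification; the paper's actual routing, with its printed forward error correction, is more sophisticated). A sensor keeps working as long as no node on its route to the central connector has been cut away:

```python
def surviving(topology, cut):
    """Sensors whose route to the hub avoids the destroyed node `cut`."""
    return {s for s, route in topology.items() if cut not in route}

# Star: every sensor has its own direct wire to the central connector.
star = {f"s{i}": [f"s{i}"] for i in range(9)}

# Chain (grid-like worst case): each route passes through every earlier sensor.
chain = {f"s{i}": [f"s{j}" for j in range(i + 1)] for i in range(9)}

print(len(surviving(star, "s4")))   # 8: only the cut sensor itself is lost
print(len(surviving(chain, "s4")))  # 4: everything wired after the cut dies too
```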
You can read the research paper here but this looks to be very useful in the DIY hacker space as well as for flexible, wearable projects that require some sort of multi-touch input. While I can’t imagine we need shirts made of this stuff, I could see a sleeve with lots of inputs or, say, a watch with a multi-touch band.
Don’t expect this to hit the next iWatch any time soon – it’s still very much in prototype stages but definitely looks quite cool.
The possible applications for drones are growing every day. From watching out for poachers in wildlife parks in Africa to delivering textbooks to students, the autonomous flying machines are tackling problems both big and small. Their ability to carry onboard sensors and HD cameras makes them ideal tools for mapping and surveillance.
Taking that idea to the extreme, engineers from senseFly, partnered with Drone Adventures, Pix4D and Mapbox, were able to create a digital model of the Matterhorn with a 20-cm resolution in three dimensions. Two teams took the company's eBee drones to the mountain with Team 1 hiking to the summit and launching the devices to fly around the top of the peak. Team 2 launched eBees from the bottom of the mountain to cover the lower parts of the mountain.
SenseFly says, "The main challenges successfully overcome were to demonstrate the mapping capabilities of minidrones at a very high altitude and in mountainous terrain where 3D flight planning is essential, all the while coping with the turbulences typically encountered in mountainous environments."
For the project, 11 flights were made totaling 340 minutes. The drones took 2,188 photos and created an HD point-cloud with 3 million datapoints. The company's eMotion2 software provided the ground control for the flights, automatically creating flight paths for the multiple drones.
This blog is the survey website of fabric | ch - studio for architecture, interaction and research.
We curate and reblog articles, research, writings, exhibitions and projects that we notice and find interesting in our everyday practice and readings.
Most articles concern the intertwined fields of architecture, territory, art, interaction design, thinking and science. From time to time, we also publish documentation about our own work and research, immersed among these related resources and inspirations.
This website is used by fabric | ch as an archive of references and resources. It is shared with all those interested in the same topics as we are, in the hope that they too will find valuable references and content in it.